umu.se Publications
Lu, Zhihan
Publications (10 of 10)
Lu, Z., Halawani, A., Feng, S., Li, H. & ur Réhman, S. (2014). Multimodal Hand and Foot Gesture Interaction for Handheld Devices. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), 11(1), Article ID 10.
Multimodal Hand and Foot Gesture Interaction for Handheld Devices
2014 (English). In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 11, no. 1, article id 10. Article in journal (Refereed). Published.
Abstract [en]

We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. The human foot gesture is detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. The 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. As a proof of concept, we developed a multimodal football game based on this approach. We confirm our system's user satisfaction through a user study.
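The pose step the abstract mentions (3D foot pose from a homography) is not spelled out in this record. As a hedged illustration only, the textbook planar-pose decomposition H ≈ K[r1 r2 t] can be sketched in numpy; the function name and formulation are assumptions, not the paper's code:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover a planar pose (R, t) from a homography H and camera
    intrinsics K, using the standard relation H ~ K [r1 r2 t] for a
    plane at Z = 0."""
    A = np.linalg.inv(K) @ H
    A = A / np.linalg.norm(A[:, 0])        # fix the unknown scale factor
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)            # project onto the nearest rotation
    return U @ Vt, t
```

Round-tripping a synthetic homography built from known K, R, and t recovers the original pose up to numerical precision.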

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2014
Keywords
Human Factors, Multimodal interaction, smartphone games, gesture estimation, HCI, mobile, vibrotactile
National Category
Media and Communication Technology Computer Systems Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-94057 (URN)
10.1145/2645860 (DOI)
000343984800002 ()
Available from: 2014-10-03. Created: 2014-10-03. Last updated: 2018-06-07. Bibliographically approved.
Lu, Z., ur Réhman, S., Khan, M. S. & Li, H. (2013). Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix. In: Proceedings of the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China. Paper presented at the International Conference on Virtual Reality and Visualization (ICVRV 2013).
Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix
2013 (English). In: Proceedings of the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China, 2013. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences captured with a monocular camera. In our novel approach, we employ a camera pose estimation method to generate stereoscopic 3D directly from 2D video without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix. To this end, we also demonstrate that plane-correspondence image stitching based on the homography matrix alone cannot generate better results. Furthermore, we utilize the camera pose model reconstructed via structure from motion (with the fundamental matrix) to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most video sequences.
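The final composition step of an anaglyph, once a second view exists, is simple channel mixing. As a hedged sketch, assuming RGB uint8 frames and the common red-cyan scheme (the paper's exact color mixing is not stated in this record):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Compose a red-cyan anaglyph frame: the red channel comes from
    the left-eye view, green and blue from the right-eye view.
    Both inputs are HxWx3 RGB uint8 arrays."""
    out = right.copy()
    out[..., 0] = left[..., 0]   # replace red channel with the left view's
    return out
```

Viewed through red-cyan glasses, each eye then sees only its own view, producing the stereoscopic illusion.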

Keywords
Anaglyph, 3D video, 2D to 3D conversion.
National Category
Embedded Systems Computer Vision and Robotics (Autonomous Systems) Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83874 (URN)
Conference
International Conference on Virtual Reality and Visualization (ICVRV 2013)
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Lu, Z., Halawani, A., Khan, M. S., ur Réhman, S. & Li, H. (2013). Finger in air: touch-less interaction on smartphone. In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden. Paper presented at the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013), Luleå, Sweden.
Finger in air: touch-less interaction on smartphone
2013 (English). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, 2013. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection and attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we design two smartphone applications: a circle menu application, which provides the user with graphics and the smartphone's status information, and a bouncing ball game, a finger-gesture-based bouncing ball application. The users interact with these applications using finger gestures through the smartphone's camera view, which trigger the interaction events and generate activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.

Place, publisher, year, edition, pages
Luleå, Sweden, 2013
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83870 (URN)
978-1-4503-2648 (ISBN)
Conference
12th International Conference on Mobile and Ubiquitous Multimedia (MUM 2013)
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Lu, Z., Khan, M. S. & ur Réhman, S. (2013). Hand and Foot Gesture Interaction for Handheld Devices. In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia. Paper presented at the 21st ACM International Conference on Multimedia (pp. 621-624). New York, NY, USA: ACM.
Hand and Foot Gesture Interaction for Handheld Devices
2013 (English). In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia, New York, NY, USA: ACM, 2013, pp. 621-624. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present a hand- and foot-based immersive multimodal interaction approach for handheld devices. A smartphone-based immersive football game is designed as a proof of concept. Our proposed method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. In this work, the human foot gesture is detected and tracked using a template matching method and the Tracking-Learning-Detection (TLD) framework. We evaluated our system's usability through a user study in which we asked participants to assess the proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of the proposed multimodal interaction approach.

Place, publisher, year, edition, pages
New York, NY, USA: ACM, 2013
Keywords
immersive multimodal interaction, smart phone games, foot gesture, HCI, mobile, vibrotactile
National Category
Signal Processing
Research subject
Computerized Image Analysis; Electronics; Human-Computer Interaction
Identifiers
urn:nbn:se:umu:diva-82189 (URN)
10.1145/2502081.2502163 (DOI)
978-1-4503-2404-5 (ISBN)
Conference
21st ACM International Conference on Multimedia
Available from: 2013-10-28. Created: 2013-10-28. Last updated: 2018-06-08. Bibliographically approved.
Khan, M. S., ur Réhman, S., Lu, Z. & Li, H. (2013). Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera. In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013: . Paper presented at The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (pp. 149-153).
Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera
2013 (English). In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, pp. 149-153. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and the facial-feature areas (more precisely, the eyes). For robust tracking of these regions in a given video sequence, it also uses the Tracking-Learning-Detection (TLD) framework. Based on the two human eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometrical approach relies on the structure of the human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of our proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
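The geometric idea — roll from the slope of the eye line, yaw from foreshortening of the inter-eye distance — can be sketched as below. This is a simplified stand-in under stated assumptions (a known frontal inter-eye reference distance in pixels, unsigned yaw, pitch ignored), not the paper's exact anthropometric model:

```python
import numpy as np

def roll_yaw_from_eyes(left_eye, right_eye, ref_eye_dist_px):
    """Estimate head roll (degrees) from the eye-line slope and an
    unsigned yaw magnitude from foreshortening of the inter-eye
    distance relative to a frontal reference distance."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = np.degrees(np.arctan2(dy, dx))
    # Under yaw, the projected inter-eye distance shrinks by cos(yaw).
    ratio = min(np.hypot(dx, dy) / ref_eye_dist_px, 1.0)
    yaw = np.degrees(np.arccos(ratio))
    return roll, yaw
```

For a frontal face the measured distance equals the reference, giving zero yaw; a half-length projection corresponds to a 60-degree turn.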

Keywords
Head pose estimation, 3D geometric modeling, human motion analysis
National Category
Signal Processing
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-82187 (URN)
10.12792/icisip2013.031 (DOI)
Conference
The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013
Available from: 2013-10-28 Created: 2013-10-28 Last updated: 2018-06-08
Lu, Z. & ur Réhman, S. (2013). Multi-Gesture based Football Game in Smart Phones. In: SA '13: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications. Paper presented at the SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, Hong Kong. NY, USA: Association for Computing Machinery (ACM).
Multi-Gesture based Football Game in Smart Phones
2013 (English). In: SA '13: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, NY, USA: Association for Computing Machinery (ACM), 2013. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
NY, USA: Association for Computing Machinery (ACM), 2013
National Category
Embedded Systems Human Computer Interaction Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-83877 (URN)
10.1145/2543651.2543677 (DOI)
978-1-4503-2633-9 (ISBN)
Conference
SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, Hong Kong, 2013
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Khan, M. S., ur Réhman, S., Lu, Z. & Li, H. (2013). Tele-embodied agent (TEA) for video teleconferencing. In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden: . Paper presented at 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden. New York
Tele-embodied agent (TEA) for video teleconferencing
2013 (English). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, New York, 2013. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

We propose a design for a teleconference system which expresses nonverbal behavior (in our case, head gesture) along with audio-video communication. Previous audio-video conferencing systems fail to present the nonverbal behaviors which we, as humans, usually use in face-to-face interaction. Recently, research in teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. The accurate representation of nonverbal gestures for such systems is still challenging because they depend on hand-operated devices (like a mouse or keyboard). Furthermore, they still fall short in presenting accurate human gestures. We believe that incorporating embodied interaction in video teleconferencing (i.e., using the physical world as a medium for interacting with digital technology) can improve nonverbal behavior representation. The experimental platform named Tele-Embodied Agent (TEA) is introduced, which incorporates the remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary test shows the accuracy (with respect to pose angles) and efficiency (with respect to time) of our proposed design. TEA can be used in the medical field, factories, offices, the gaming industry, the music industry, and for training.

Place, publisher, year, edition, pages
New York, 2013
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83871 (URN)
978-1-4503-2648 (ISBN)
Conference
12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Lu, Z. & ur Réhman, S. (2013). Touch-less interaction smartphone on go!. In: Proceedings of SIGGRAPH Asia 2013. Paper presented at SIGGRAPH Asia 2013. New York, NY, USA: ACM.
Touch-less interaction smartphone on go!
2013 (English). In: Proceedings of SIGGRAPH Asia 2013, New York, NY, USA: ACM, 2013. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

A smartphone touch-less interaction approach based on mixed hardware and software is proposed in this work. The software application renders the circle menu graphics and status information using the smartphone's screen and audio. Augmented reality image rendering technology is employed for convenient finger-phone interaction. The users interact with the application using finger gesture motion behind the camera, which triggers the interaction events and generates activity sequences for interactive buffers. The combination of contour-based template matching (CTM) and Tracking-Learning-Detection (TLD) provides the core support for hand-gesture interaction by accurately detecting and tracking the hand gesture.

Place, publisher, year, edition, pages
New York, NY, USA: ACM, 2013
National Category
Embedded Systems Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83879 (URN)
10.1145/2542302.2542336 (DOI)
978-1-4503-2634-6 (ISBN)
Conference
SIGGRAPH Asia 2013
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Lu, Z., ur Réhman, S. & Chen, G. (2013). WebVRGIS: a P2P network engine for VR data and GIS analysis. In: Minho Lee, Akira Hirose, Zeng-Guang Hou, Rhee Man Kil (Eds.), Lecture Notes in Computer Science: Neural Information Processing. Paper presented at the 20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013 (pp. 503-510). Springer Berlin Heidelberg.
WebVRGIS: a P2P network engine for VR data and GIS analysis
2013 (English). In: Lecture Notes in Computer Science: Neural Information Processing / [ed] Minho Lee, Akira Hirose, Zeng-Guang Hou, Rhee Man Kil, Springer Berlin Heidelberg, 2013, pp. 503-510. Conference paper, Published paper (Refereed).
Abstract [en]

A peer-to-peer (P2P) network engine for geographic VR data and GIS analysis on a 3D globe is proposed, which synthesizes several recent information technologies, including web virtual reality (VR), 3D geographical information systems (GIS), 3D visualization, and P2P networking. The engine is used to organize and present massive spatial data such as remote sensing data, and to share and publish them online over the P2P network based on hashing. The P2P network maps the users in real geographic space to their avatars in the virtual scene, as well as to the nodes in the virtual network. It also supports the integrated VRGIS functions, including 3D spatial analysis and 3D visualization of spatial processes, and serves as a web engine for the 3D globe and the digital city.
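The hash-based P2P sharing the abstract mentions is not detailed in this record. A minimal content-addressing sketch, assuming SHA-256 digests as chunk keys (an assumption for illustration, not the paper's stated scheme):

```python
import hashlib

def chunk_table(data: bytes, chunk_size: int = 1 << 16):
    """Split a spatial-data blob (e.g. a remote-sensing tile) into
    fixed-size chunks and key each chunk by its SHA-256 digest, so
    peers can request chunks by hash and verify them on receipt."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]
```

A receiving peer recomputes the digest of each chunk and compares it against the advertised key, so corrupted or tampered chunks are rejected before reassembly.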

Place, publisher, year, edition, pages
Springer Berlin Heidelberg, 2013
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8226
Keywords
P2P network, WebVR, VRGIS, Big data, 3D Globe
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83880 (URN)
10.1007/978-3-642-42054-2_63 (DOI)
978-3-642-42053-5 (ISBN)
Conference
20th International Conference, ICONIP 2013, Daegu, Korea, November 3-7, 2013
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.
Lu, Z., ur Réhman, S. & Chen, G. (2013). WebVRGIS: WebGIS based interactive online 3D virtual community. In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013). Paper presented at the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), Xi'an, Shaanxi, China, 14–15 September 2013 (pp. 94-99). Institute of Electrical and Electronics Engineers (IEEE).
WebVRGIS: WebGIS based interactive online 3D virtual community
2013 (English). In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), Institute of Electrical and Electronics Engineers (IEEE), 2013, pp. 94-99. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we present a WebVRGIS-based interactive online 3D virtual community built on WebGIS and web VR technology. It is a multi-dimensional (MD) WebGIS-based 3D interactive online virtual community that serves as a real-time 3D communication system and a web development platform, capable of running in a variety of browsers. In this work, four key issues are studied: (1) multi-source MD geographical data fusion in the WebGIS, (2) scene combination with 3D avatars, (3) massive data network dispatch, and (4) real-time multi-user avatar interaction. Our system is divided into three modules: data preprocessing, background management, and front-end user interaction. The core of the front-end interaction module is packaged in the MD map expression engine 3GWebMapper and the plug-in-free network 3D rendering engine WebFlashVR. We evaluated the robustness of our system on three campuses of the Ocean University of China (OUC) as a testing base. The results show the high efficiency, ease of use, and robustness of our system.

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2013
Keywords
Virtual Reality, Geographic Information System, Virtual Community, Virtual Geographic Environment
National Category
Embedded Systems Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-83872 (URN)
10.1109/ICVRV.2013.23 (DOI)
000330838000015 ()
Conference
2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), Xi'an, Shaanxi, China, 14–15 September 2013
Available from: 2013-12-10. Created: 2013-12-10. Last updated: 2018-06-08. Bibliographically approved.