umu.se Publications
1 - 10 of 10
  • 1.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lu, Zhihan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera (2013). In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, p. 149-153. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm first employs Haar-like features to detect the face and the facial-feature area (more precisely, the eyes). For robust tracking of these regions it also uses the Tracking-Learning-Detection (TLD) framework in a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure devised from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the structure of the human facial features in the camera-view plane to estimate the yaw, pitch and roll of the head. The accuracy and effectiveness of the proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
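
    As a rough illustration of the detection step described above, the sketch below uses OpenCV Haar cascades to locate the face and eyes and derives a crude roll/yaw estimate from the two eye centres. It is not the authors' implementation: the pivot-point model, the anthropometric/MPEG-4 distance measure and the TLD tracking are omitted, and the angle formulas are simplified assumptions.

        # Illustrative sketch only: Haar-cascade face/eye detection plus a crude
        # roll/yaw estimate from the two eye centres. The paper's pivot-point model,
        # anthropometric/MPEG-4 distances and TLD tracking are not reproduced.
        import math
        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def rough_head_angles(frame):
            """Return (roll, yaw) in degrees for the first detected face, or None."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            roi = gray[y:y + h, x:x + w]
            eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
            if len(eyes) < 2:
                return None
            # Keep the two largest detections as the eyes, ordered left to right.
            (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(
                sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2],
                key=lambda e: e[0])
            c1 = (ex1 + ew1 / 2.0, ey1 + eh1 / 2.0)
            c2 = (ex2 + ew2 / 2.0, ey2 + eh2 / 2.0)
            # Roll: in-plane tilt of the inter-ocular line.
            roll = math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))
            # Yaw (very rough assumption): horizontal offset of the eye midpoint
            # from the face-box centre, normalised by half the face width.
            offset = ((c1[0] + c2[0]) / 2.0 - w / 2.0) / (w / 2.0)
            yaw = math.degrees(math.asin(max(-1.0, min(1.0, offset))))
            return roll, yaw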

  • 2.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lu, Zhihan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Tele-embodied agent (TEA) for video teleconferencing (2013). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden. New York, 2013. Conference paper (Refereed)
    Abstract [en]

    We propose a design for a teleconferencing system that expresses nonverbal behavior (in our case, head gestures) along with audio-video communication. Previous audio-video conferencing systems fall short of presenting the nonverbal behaviors that we, as humans, usually use in face-to-face interaction. Recently, research on teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. Accurately representing nonverbal gestures in such systems is still challenging because they depend on hand-operated devices (such as a mouse or keyboard), and they still lack accurate representation of human gestures. We believe that incorporating embodied interaction in video teleconferencing, i.e., using the physical world as a medium for interacting with digital technology, can enable the representation of nonverbal behavior. We introduce an experimental platform named Tele-Embodied Agent (TEA), which incorporates the remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary test shows the accuracy (with respect to pose angles) and efficiency (with respect to time) of the proposed design. TEA can be used in the medical field, factories, offices, the gaming industry, the music industry, and for training.

  • 3.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Chinese Academy of Science, China.
    Halawani, Alaa
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Palestine Polytechnic University.
    Feng, Shengzhong
    Chinese Academy of Science, China.
    Li, Haibo
    Royal Institute of Technology, Stockholm, Sweden.
    Ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Multimodal Hand and Foot Gesture Interaction for Handheld Devices (2014). In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 11, no. 1, article id 10. Article in journal (Refereed)
    Abstract [en]

    We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines the two input modalities (hand and foot) and provides a coordinated output to both modalities along with audio and video. The human foot gesture is detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. The 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. As a proof of concept, we developed a multimodal football game based on this approach, and we confirm user satisfaction with the system through a user study.
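
    As a hedged sketch of the homography-based pose step mentioned above, the code below estimates a plane-induced homography from point correspondences and decomposes it into candidate rotations and translations with OpenCV. The CTD/TLD detection pipeline is not reproduced, and the intrinsic matrix and correspondences shown are hypothetical.

        # Illustrative sketch only: recovering candidate plane-induced poses from a
        # homography, in the spirit of the homography-based 3D foot-pose step above.
        # The CTD/TLD pipeline is not reproduced; the correspondences and the
        # intrinsic matrix K below are hypothetical.
        import numpy as np
        import cv2

        def pose_candidates_from_plane(template_pts, image_pts, K):
            """Decompose the homography mapping planar template points to their
            detected image locations into candidate (R, t, n) solutions."""
            H, _ = cv2.findHomography(template_pts, image_pts, cv2.RANSAC, 3.0)
            if H is None:
                return []
            # decomposeHomographyMat returns up to four solutions; the physically
            # valid one must be selected with extra constraints (e.g. positive depth).
            num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
            return list(zip(rotations, translations, normals))

        # Hypothetical usage with dummy data:
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        template = np.float32([[0, 0], [100, 0], [100, 150], [0, 150]]).reshape(-1, 1, 2)
        detected = np.float32([[12, 8], [108, 14], [102, 160], [6, 152]]).reshape(-1, 1, 2)
        candidates = pose_candidates_from_plane(template, detected, K)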

  • 4.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Halawani, Alaa
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Finger in air: touch-less interaction on smartphone (2013). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger-gesture detection, which attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger-gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we design two smartphone applications: a circle menu application, which provides the user with graphics and the smartphone's status information, and a bouncing ball game, a finger-gesture-based bouncing ball application. Users interact with these applications using finger gestures through the smartphone's camera view, which trigger interaction events and generate activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.
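
    The abstract does not specify the finger-gesture detector, so the sketch below is only a generic assumption: a naive markerless fingertip detector based on HSV skin segmentation and the topmost point of the largest contour, written with OpenCV. The thresholds and the approach itself are illustrative, not the paper's method.

        # Illustrative sketch only (not the paper's method): a naive markerless
        # fingertip detector using HSV skin segmentation and the topmost point of
        # the largest contour. The colour thresholds are arbitrary assumptions.
        import numpy as np
        import cv2

        SKIN_LO = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV lower bound
        SKIN_HI = np.array([25, 180, 255], dtype=np.uint8)  # assumed HSV upper bound

        def detect_fingertip(frame_bgr):
            """Return (x, y) of a candidate fingertip in the frame, or None."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, SKIN_LO, SKIN_HI)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            # OpenCV 4.x returns (contours, hierarchy).
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            hand = max(contours, key=cv2.contourArea)
            if cv2.contourArea(hand) < 1000:  # ignore small skin-coloured blobs
                return None
            # Topmost contour point is taken as the fingertip candidate.
            x, y = hand[hand[:, :, 1].argmin()][0]
            return int(x), int(y)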

  • 5.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sikandar Lal Khan, Muhammad
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Hand and Foot Gesture Interaction for Handheld Devices (2013). In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia, New York, NY, USA: ACM, 2013, p. 621-624. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a hand- and foot-based immersive multimodal interaction approach for handheld devices. A smartphone-based immersive football game is designed as a proof of concept. The proposed method combines the input modalities (hand and foot) and provides a coordinated output to both modalities along with audio and video. In this work, human foot gestures are detected and tracked using a template-matching method and the Tracking-Learning-Detection (TLD) framework. We evaluated the system's usability through a user study in which participants assessed the proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of the proposed multimodal interaction approach.
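
    As a minimal sketch of the template-matching step described above (the TLD tracking stage is not shown), the code below localises a foot template in a frame with OpenCV's normalised cross-correlation matcher; the template image and the score threshold are assumptions.

        # Illustrative sketch only: normalised cross-correlation template matching,
        # one way to localise the foot template before handing the region to a
        # tracker such as TLD. The template image and threshold are assumptions.
        import cv2

        def locate_template(frame_gray, template_gray, threshold=0.7):
            """Return the best-match bounding box (x, y, w, h), or None if the
            match score is below the threshold."""
            scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(scores)
            if max_val < threshold:
                return None
            h, w = template_gray.shape[:2]
            return max_loc[0], max_loc[1], w, h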

  • 6.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. SIAT, Chinese Academy of Science, China.
    ur Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. SIAT, Chinese Academy of Science, China.
    Multi-Gesture based Football Game in Smart Phones (2013). In: SA '13: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, NY, USA: Association for Computing Machinery (ACM), 2013. Conference paper (Refereed)
  • 7.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Touch-less interaction smartphone on go! (2013). In: Proceedings of SIGGRAPH Asia 2013, ACM, New York, NY, USA, 2013. Conference paper (Refereed)
    Abstract [en]

    In this work we propose touch-less interaction for smartphones based on mixed hardware and software. The software application renders circle-menu graphics and status information using the smartphone's screen and audio. Augmented-reality image rendering is employed for convenient finger-phone interaction. Users interact with the application through finger-gesture motion behind the camera, which triggers interaction events and generates activity sequences for interactive buffers. The combination of Contour-based Template Matching (CTM) and Tracking-Learning-Detection (TLD) provides the core support for hand-gesture interaction by accurately detecting and tracking the hand gesture.

  • 8.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Chen, Ge
    Ocean University China, Qingdao.
    WebVRGIS: a P2P network engine for VR data and GIS analysis (2013). In: Lecture Notes in Computer Science: Neural Information Processing / [ed] Minho Lee, Akira Hirose, Zeng-Guang Hou, Rhee Man Kil, Springer Berlin Heidelberg, 2013, p. 503-510. Conference paper (Refereed)
    Abstract [en]

    A peer-to-peer (P2P) network engine for geographic VR data and GIS analysis on a 3D globe is proposed. It synthesizes several recent information technologies, including web virtual reality (VR), 3D geographic information systems (GIS), 3D visualization and P2P networking. The engine is used to organize and present massive spatial data, such as remote sensing data, and to share and publish it online via hash-based P2P. The P2P network maps users in real geographic space to their avatars in the virtual scene and to nodes in the virtual network. The engine also supports integrated VRGIS functions, including 3D spatial analysis and 3D visualization of spatial processes, and serves as a web engine for a 3D globe and digital city.
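
    The hash-based P2P publishing idea above can be pictured with a generic content-addressing sketch: each spatial tile gets a SHA-256 key and is assigned to a peer. The peer list and the modulo assignment below are purely hypothetical; the paper's actual network protocol is not described here.

        # Illustrative sketch only: generic hash-based content addressing for
        # spatial tiles. The peer list and the modulo assignment are hypothetical;
        # the paper's actual P2P protocol is not described here.
        import hashlib

        PEERS = ["peer-a.example", "peer-b.example", "peer-c.example"]  # hypothetical

        def tile_key(layer: str, zoom: int, x: int, y: int, payload: bytes) -> str:
            """Content-addressed key: hash of the tile coordinates and its data."""
            h = hashlib.sha256()
            h.update(f"{layer}/{zoom}/{x}/{y}".encode("utf-8"))
            h.update(payload)
            return h.hexdigest()

        def responsible_peer(key: str) -> str:
            """Map a tile key onto one of the known peers."""
            return PEERS[int(key, 16) % len(PEERS)]

        # Hypothetical usage:
        key = tile_key("remote-sensing", 12, 3371, 1552, b"...tile bytes...")
        peer = responsible_peer(key)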

  • 9.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Chen, Ge
    Ocean University of China, Qingdao, China.
    WebVRGIS: WebGIS based interactive online 3D virtual community (2013). In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), Institute of Electrical and Electronics Engineers (IEEE), 2013, p. 94-99. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a WebVRGIS-based interactive online 3D virtual community built on WebGIS and web VR technology. It is a multi-dimensional (MD) web geographic information system (WebGIS) based 3D interactive online virtual community, a real-time virtual 3D communication system and web-system development platform capable of running in a variety of browsers. In this work, four key issues are studied: (1) multi-source MD geographic data fusion in the WebGIS, (2) scene composition with 3D avatars, (3) dispatching massive data over the network, and (4) real-time multi-user avatar interaction. Our system is divided into three modules: data preprocessing, background management, and front-end user interaction. The core of the front-end interaction module is packaged in the MD map-expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We evaluated the robustness of our system on three campuses of Ocean University of China (OUC) as a testing base. The results show that the system is efficient, easy to use, and robust.

  • 10.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden.
    Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix (2013). In: Proceedings of 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences captured with a monocular camera. In our novel approach we employ camera pose estimation to generate stereoscopic 3D directly from 2D video, without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix, and we demonstrate that plane-correspondence image stitching based on the homography matrix alone cannot generate better results. Furthermore, we utilize the camera pose model reconstructed by structure from motion (with the fundamental matrix) to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most of the video sequences.
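
    As an illustrative sketch of the two building blocks named above, the code below estimates the fundamental matrix between two frames with ORB matches and RANSAC, and composites a red-cyan anaglyph from a chosen frame pair. The paper's plane-correspondence stitching and structure-from-motion pose model are not reproduced.

        # Illustrative sketch only: estimating the fundamental matrix between two
        # frames with ORB matches and RANSAC, and compositing a red-cyan anaglyph
        # from a chosen frame pair. The paper's plane-correspondence stitching and
        # structure-from-motion pose model are not reproduced.
        import numpy as np
        import cv2

        def fundamental_from_frames(img1_gray, img2_gray):
            """Match ORB features between two frames and estimate F with RANSAC."""
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(img1_gray, None)
            k2, d2 = orb.detectAndCompute(img2_gray, None)
            if d1 is None or d2 is None:
                return None, None, None
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
            pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
            F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
            if F is None:
                return None, None, None
            inliers = mask.ravel() == 1
            return F, pts1[inliers], pts2[inliers]

        def red_cyan_anaglyph(left_bgr, right_bgr):
            """Red channel from the left view, green and blue from the right view."""
            anaglyph = right_bgr.copy()
            anaglyph[:, :, 2] = left_bgr[:, :, 2]  # OpenCV channel order is B, G, R
            return anaglyph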
