umu.se Publications
1 - 19 of 19
  • 1.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Enabling physical action in computer mediated communication: an embodied interaction approach, 2015. Licentiate thesis, comprehensive summary (Other academic).
  • 2.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Presence through actions: theories, concepts, and implementations, 2017. Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

    During face-to-face meetings, humans use multimodal information: verbal information, visual information, body language, facial expressions, and other non-verbal gestures. In contrast, during computer-mediated communication (CMC), humans rely either on mono-modal information, such as text-only, voice-only, or video-only, or on bi-modal information using audiovisual modalities such as video teleconferencing. Psychologically, the difference between the two lies in the level of the subjective experience of presence: people perceive a reduced feeling of presence in CMC. Despite current advancements, CMC still falls far short of face-to-face communication, especially in terms of the experience of presence.

    This thesis aims to introduce new concepts, theories, and technologies for presence design, where actions are the core of creating presence. The contribution of the thesis can thus be divided into a technical contribution and a knowledge contribution. Technically, this thesis details novel technologies for improving the presence experience during mediated communication (video teleconferencing). The proposed technologies include action robots (a telepresence mechatronic robot (TEBoT) and a face robot), embodied control techniques (head orientation modeling and virtual reality headset based collaboration), and face reconstruction/retrieval algorithms. The introduced technologies enable action possibilities and embodied interactions that improve the presence experience between distantly located participants. The novel setups were put into real experimental scenarios, and the well-known social, spatial, and gaze-related problems were analyzed.

    The developed technologies and the results of the experiments led to the knowledge contribution of this thesis. In terms of knowledge contribution, this thesis presents a more general theoretical conceptual framework for mediated communication technologies. This conceptual framework can guide telepresence researchers toward the development of appropriate technologies for mediated communication applications. Furthermore, this thesis presents a novel strong concept, "presence through actions", that brings philosophical understanding to the development of presence-related technologies. The strong concept of presence through actions is intermediate-level knowledge that proposes a new way of creating and developing future 'presence artifacts'. Presence through actions is an action-oriented phenomenological approach to presence that differs from traditional immersive presence approaches, which are based (implicitly) on rationalist, internalist views.

  • 3.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Halawani, Alaa
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Computer Engineering Department, Palestine Polytechnic University, Hebron 90100, Palestine.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Action Augmented Real Virtuality Design for Presence, 2018. In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no. 4, pp. 961-972. Article in journal (Refereed).
    Abstract [en]

    This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fall short in presenting the nonverbal behaviors that humans express in face-to-face communication, which results in a decreased experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions to increase the feeling of presence; we name this concept presence through actions. Using this new concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and the face representation algorithm has been tested in a real video teleconferencing scenario for its ability to address the challenges related to spatial and social presence. We performed a user study in which the invited participants were asked to experience our novel setup and compare it with a traditional video teleconferencing setup. The results show that the action capabilities not only increase the feeling of spatial presence but also increase the feeling of social presence of a remote person among local collaborators.

  • 4.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Réhman, Shafiq ur
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing, 2014. In: Design, user experience, and usability: User experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, pp. 574-585. Conference paper (Refereed).
    Abstract [en]

    In spite of the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study testing the hypothesis: "An embodied interaction based video conferencing system performs better than a standard video conferencing system in representing nonverbal behaviors, thus creating a 'feeling of presence' of a remote person among his/her local collaborators". Our ETS integrates standard audio-video conferencing with mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we also performed a system evaluation, which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases 'in-meeting interaction' and enhances the 'feeling of presence' of a remote participant among his/her collaborators.

  • 5.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Gaze perception and awareness in smart devices, 2016. In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, pp. 55-65. Article in journal (Refereed).
    Abstract [en]

    Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, the video teleconferencing devices in common use today are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we revisit the question: how can a remote user's gaze awareness among his/her collaborators be improved? Our hypothesis is that accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup for studying gaze awareness/perception on 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona Lisa gaze effect', where the gaze appears directed at the observer regardless of his/her position in the room, and (ii) 'gaze awareness/faithfulness', the ability of an observer to perceive an accurate spatial relationship between the observing person and the object. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze into distant space.
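    A minimal sketch of the embodiment idea described above, assuming a hypothetical 3-DOF servo interface; the joint names, limits, and smoothing factor are illustrative, not the actual ETS firmware. The remote head orientation is low-pass filtered and clamped to joint limits before being sent to the neck platform.

```python
# Hypothetical sketch: mapping a remote user's head orientation onto a
# 3-DOF neck platform so the mounted screen physically turns toward
# collaborators. Limits and smoothing are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class NeckPose:
    yaw: float    # degrees, + left
    pitch: float  # degrees, + up
    roll: float   # degrees, + clockwise

LIMITS = {"yaw": 60.0, "pitch": 30.0, "roll": 25.0}  # assumed joint ranges
ALPHA = 0.3  # exponential smoothing factor to avoid jerky motion

def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def smooth_and_limit(target: NeckPose, current: NeckPose) -> NeckPose:
    """Low-pass filter the tracked head pose, then clamp to joint limits."""
    return NeckPose(
        yaw=clamp(current.yaw + ALPHA * (target.yaw - current.yaw), LIMITS["yaw"]),
        pitch=clamp(current.pitch + ALPHA * (target.pitch - current.pitch), LIMITS["pitch"]),
        roll=clamp(current.roll + ALPHA * (target.roll - current.roll), LIMITS["roll"]),
    )

if __name__ == "__main__":
    pose = NeckPose(0.0, 0.0, 0.0)
    for tracked in [NeckPose(45, 10, 0), NeckPose(80, -40, 5)]:  # fake tracker output
        pose = smooth_and_limit(tracked, pose)
        print(pose)
```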

  • 6.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH Royal Institute of Technology.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tele-Immersion: Virtual Reality based Collaboration, 2016. In: HCI International 2016: Posters' Extended Abstracts: 18th International Conference, HCI International 2016, Toronto, Canada, July 17-22, 2016, Proceedings, Part I / [ed] Constantine Stephanidis, Springer, 2016, pp. 352-357. Conference paper (Refereed).
    Abstract [en]

    The 'perception of being present in another space' during video teleconferencing is a challenging task. This work makes an effort to improve a user's perception of being 'present' in another space by employing a virtual reality (VR) headset and an embodied telepresence system (ETS). In our application scenario, a remote participant uses a VR headset to collaborate with local collaborators. At the local site, an ETS is used as a physical representation of the remote participant among his/her local collaborators. The head movements of the remote person are mapped and presented by the ETS along with audio-video communication. Key considerations of the complete design are discussed, and solutions to challenges related to head tracking, audio-video communication, and data communication are presented. The proposed approach is validated by a user study with quantitative analysis of immersion and presence parameters.
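    The abstract mentions solving head tracking and data communication between the VR headset and the ETS. As one plausible, entirely hypothetical wire format (not the paper's actual protocol), head orientation could be streamed as fixed-size UDP packets:

```python
# Illustrative sketch (not the paper's protocol): streaming VR-headset
# yaw/pitch/roll to a remote embodied telepresence unit as compact UDP packets.
import socket
import struct

PACKET_FMT = "!3f"  # network byte order: yaw, pitch, roll in degrees

def send_pose(sock, addr, yaw, pitch, roll):
    sock.sendto(struct.pack(PACKET_FMT, yaw, pitch, roll), addr)

def recv_pose(sock):
    data, _ = sock.recvfrom(struct.calcsize(PACKET_FMT))
    return struct.unpack(PACKET_FMT, data)

if __name__ == "__main__":
    addr = ("127.0.0.1", 9000)
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(addr)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(tx, addr, 12.5, -3.0, 0.8)   # one fake tracker sample
    print(recv_pose(rx))                   # -> (12.5, -3.0, 0.8)
```

    In a real setup, sequence numbers and timestamps would be added so stale packets can be dropped rather than replayed.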

  • 7.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system, 2016. In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, pp. 2597-2610. Article in journal (Refereed).
    Abstract [en]

    Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built by considering a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer vision based geometric head pose estimation algorithm, a model based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of our proposed system.
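    A rough sketch of how such a vision-to-motion control loop might be wired, assuming a proportional velocity controller with saturation; the gains, loop rate, and the stand-in pose values are illustrative assumptions, not the MBD controller from the paper.

```python
# Assumed minimal control loop: a vision stage publishes a head yaw estimate,
# and a fixed-rate loop turns the pose error into a joint velocity command.
import time

KP = 2.0          # proportional gain (1/s), assumed
RATE_HZ = 50      # control loop rate, assumed
MAX_VEL = 90.0    # deg/s joint speed limit, assumed

def p_velocity_command(target_deg, actual_deg):
    """Proportional velocity command with saturation."""
    v = KP * (target_deg - actual_deg)
    return max(-MAX_VEL, min(MAX_VEL, v))

def run_loop(steps=10):
    joint = 0.0                      # current yaw joint angle (deg)
    target = 30.0                    # latest vision-estimated head yaw (deg)
    dt = 1.0 / RATE_HZ
    for _ in range(steps):
        v = p_velocity_command(target, joint)
        joint += v * dt              # integrate as a stand-in for the servo
        print(f"cmd {v:6.2f} deg/s -> joint {joint:6.2f} deg")
        time.sleep(dt)

if __name__ == "__main__":
    run_loop()
```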

  • 8.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Distance Communication: Trends and Challenges and How to Resolve them, 2014. In: Strategies for a creative future with computer science, quality design and communicability / [ed] Francisco V. C. Ficarra, Kim Veltman, Kaoru Sumi, Jacqueline Alma, Mary Brie, Miguel C. Ficarra, Domen Verber, Bojan Novak, and Andreas Kratky, Italy: Blue Herons Editions, 2014. Chapter in book (Refereed).
    Abstract [en]

    Distance communication is becoming an important part of our lives because of the current advancement in computer mediated communication (CMC). Despite this advancement, CMC, especially video teleconferencing, is still far from face-to-face (FtF) interaction. This study focuses on the advancements in video teleconferencing and their trends and challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques to improve the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an embodied interaction based distance communication framework is developed, and its effectiveness is validated by user studies. To summarize, this work considers the following questions: What factors make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more non-verbal modalities be added to enhance the video teleconferencing experience? Finally, we also provide future directions for embodied interaction based video teleconferencing.

  • 9.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Expressive Multimedia: Bringing Action to Physical World by Dancing-Tablet, 2015. In: Proceedings of the 2nd Workshop on Computational Models of Social Interactions: Human-Computer-Media Communication, ACM Digital Library, 2015, pp. 9-14. Conference paper (Refereed).
    Abstract [en]

    Design practice based on the embodied interaction concept focuses on developing new user interfaces for computer devices that merge digital content with the physical world. In this work we propose a novel embodied interaction based design in which the 'action' information of digital content is presented in the physical world. More specifically, we map the 'action' information of video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet, in which a tablet PC dances to the rhythm of a song; the 'action' information is thus not confined to a 2D flat display but also expressed by it. This paper presents (i) the hardware design of our mechatronic dancing-tablet platform, (ii) the software algorithm for musical feature extraction, and (iii) an embodied computational model for mapping the 'action' information of the musical expression to the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.
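    The paper's musical feature extraction is not reproduced here; the following numpy-only stand-in illustrates the mapping idea under stated assumptions: detect energy onsets in an audio signal and convert each onset into an alternating tilt command for the tablet platform.

```python
# Not the paper's algorithm; a minimal numpy-only sketch of the mapping idea.
# Thresholds, angles, and the synthetic click track are illustrative.
import numpy as np

SR = 22050
def synthetic_click_track(bpm=120, seconds=4, sr=SR):
    """Decaying clicks on every beat, so the example needs no audio file."""
    y = np.zeros(sr * seconds)
    step = int(sr * 60 / bpm)
    click = np.exp(-np.linspace(0, 8, 256))
    for start in range(0, len(y) - len(click), step):
        y[start:start + len(click)] += click
    return y

def detect_onsets(y, sr=SR, frame=512, threshold=3.0):
    """Flag frames whose energy jumps well above the mean frame energy."""
    frames = y[: len(y) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).sum(axis=1)
    mean = energy.mean() + 1e-12
    onset_frames = np.where(energy > threshold * mean)[0]
    return onset_frames * frame / sr  # onset times in seconds

if __name__ == "__main__":
    times = detect_onsets(synthetic_click_track())
    for i, t in enumerate(times):
        tilt = 20.0 if i % 2 == 0 else -20.0  # alternate tilt per onset
        print(f"onset at {t:5.2f}s -> tilt {tilt:+.0f} deg")
```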

  • 10.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    La Hera, Pedro
    Liu, Feng
    Li, Haibo
    A pilot user's prospective in mobile robotic telepresence system, 2014. In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014. Conference paper (Refereed).
    Abstract [en]

    In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for a pilot user. We used an Embodied Telepresence System (ETS), previously designed to enhance the video teleconferencing experience of the collaborators. Here we deploy the ETS in a novel scenario to improve the pilot user's experience during distance communication: the ETS is used to adjust the pilot user's view at the distant location (e.g. a distantly located conference/meeting). A velocity profile control for the ETS is developed that is implicitly controlled by the head of the pilot user. An experiment was conducted to test whether the view adjustment capability of the ETS increases the collaboration experience of video conferencing for the pilot user. In a user study, participants (pilot users) interacted using the ETS and a traditional computer-based video conferencing tool. Overall, the user study suggests that our approach is effective and enhances the video conferencing experience for the pilot user.
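    The abstract does not spell out the velocity profile formulation, so the following is a hedged sketch of one common choice: a trapezoidal profile that accelerates, cruises, and decelerates a view-adjustment joint toward the angle implied by the pilot user's head. All parameter values are assumptions.

```python
# One plausible "velocity profile" (not necessarily the paper's): trapezoidal
# motion under assumed speed and acceleration limits.
import numpy as np

DT = 0.02  # control period in seconds, assumed

def trapezoidal_profile(distance, v_max, a_max, dt=DT):
    """Velocity samples for moving `distance` degrees under v_max/a_max limits."""
    t_acc = v_max / a_max                      # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc ** 2           # distance covered accelerating
    if 2 * d_acc > distance:                   # triangular: never reaches v_max
        t_acc = np.sqrt(distance / a_max)
        v_peak, t_cruise = a_max * t_acc, 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    t = np.arange(0.0, t_total, dt)
    v = np.minimum.reduce([a_max * t, np.full_like(t, v_peak), a_max * (t_total - t)])
    return t, np.clip(v, 0.0, None)

if __name__ == "__main__":
    t, v = trapezoidal_profile(distance=40.0, v_max=30.0, a_max=60.0)
    print(f"peak {v.max():.1f} deg/s over {t[-1]:.2f} s, travel ~{v.sum() * DT:.1f} deg")
```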

  • 11.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lu, Zhihan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera, 2013. In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, pp. 149-153. Conference paper (Other academic).
    Abstract [en]

    In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions in a given video sequence, it also uses the Tracking-Learning-Detection (TLD) framework. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometrical approach relies on the structure of human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of our proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
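    A simplified sketch in the spirit of this pipeline, using OpenCV's stock Haar cascades. The TLD tracking stage and the anthropometric pivot model are omitted, and the yaw proxy below is an ad hoc assumption rather than the paper's geometry.

```python
# Assumption-laden sketch: Haar cascades find the face and both eyes; the eye
# geometry yields a crude roll (eye-line slope) and yaw (eye midpoint offset).
import math
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def crude_head_pose(gray):
    faces = face_cc.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
        if len(eyes) < 2:
            continue
        # Take the two largest detections as the eye pair, ordered left/right.
        (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(
            sorted(eyes, key=lambda e: e[2] * e[3])[-2:], key=lambda e: e[0])
        c1 = (ex1 + ew1 / 2, ey1 + eh1 / 2)
        c2 = (ex2 + ew2 / 2, ey2 + eh2 / 2)
        roll = math.degrees(math.atan2(c2[1] - c1[1], c2[0] - c1[0]))
        mid_x = (c1[0] + c2[0]) / 2
        yaw = 90.0 * (mid_x - w / 2) / (w / 2)  # ad hoc linear yaw proxy
        return yaw, roll
    return None

if __name__ == "__main__":
    img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # any frontal face photo
    print(crude_head_pose(img) if img is not None else "provide face.jpg")
```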

  • 12.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lu, Zhihan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Tele-embodied agent (TEA) for video teleconferencing, 2013. In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, New York, 2013. Conference paper (Refereed).
    Abstract [en]

    We propose a design for a teleconference system that expresses nonverbal behavior (in our case head gestures) along with audio-video communication. Previous audio-video conferencing systems fall short in presenting the nonverbal behaviors that we, as humans, usually use in face-to-face interaction. Recently, research in teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. The accurate representation of non-verbal gestures in such systems is still challenging because they depend on hand-operated devices (like mouse or keyboard), and they still fall short in presenting accurate human gestures. We believe that incorporating embodied interaction in video teleconferencing, i.e., using the physical world as a medium for interacting with digital technology, can result in better nonverbal behavior representation. The experimental platform named Tele-Embodied Agent (TEA) is introduced, which incorporates a remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary test shows the accuracy (with respect to pose angles) and efficiency (with respect to time) of our proposed design. TEA can be used in the medical field, factories, offices, the gaming industry, the music industry, and for training.

  • 13.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. University of East London, United Kingdom.
    Mi, Yongcui
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Naeem, Usman
    University of East London, United Kingdom.
    Beskow, Jonas
    The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Li, Haibo
    The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Moveable facial features in a Social Mediator, 2017. In: Intelligent Virtual Agents: IVA 2017 / [ed] Beskow J., Peters C., Castellano G., O'Sullivan C., Leite I., Kopp S., Springer London, 2017, pp. 205-208. Conference paper (Refereed).
    Abstract [en]

    A brief display of facial feature based behavior has a major impact on personality perception in human-human communication. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach for robotic face presentation based on moveable 2D facial features and present a comparative study in which a synthesized face is projected using three setups: (1) a 3D mask, (2) a 2D screen, and (3) our 2D moveable facial feature based visualization. We found that a robot's personality and character are highly influenced by the projected face quality as well as the motion of facial features.

  • 14.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. University of East London, London, England.
    Söderström, Ulrik
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Halawani, Alaa
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Face-off: a Face Reconstruction Technique for Virtual Reality (VR) Scenarios, 2016. In: Computer Vision: ECCV 2016 Workshops / [ed] Hua G., Jégou H., Springer, 2016, Vol. 9913, pp. 490-503. Conference paper (Refereed).
    Abstract [en]

    Virtual Reality (VR) headsets occlude a significant portion of the human face. The real human face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable camera setup-based solution to reconstruct the real face of a person wearing a VR headset. At the core of our solution is asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full face, lip, and eye region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step aligns the test information with the training data, while the latter uses half-face information to reconstruct the full face using the aPCA-trained data. The proposed approach is validated with qualitative and quantitative analysis.
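    The authors' exact aPCA formulation is not reproduced here; the following is a conceptually related PCA-completion sketch on synthetic data: train on full face vectors, then recover a full vector from only the visible coordinates by least-squares fitting of the component coefficients.

```python
# Related-in-spirit sketch (not the paper's aPCA): PCA trained on full
# vectors, partial observation completed via least squares on the basis rows.
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 200, 100, 10                # "pixels", training faces, components

train = rng.normal(size=(N, K)) @ rng.normal(size=(K, D))  # synthetic faces
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:K]                        # principal components (K x D)

def reconstruct(partial, visible):
    """Fill a face vector given values at `visible` indices only."""
    A = basis[:, visible].T                           # observed rows of basis
    coeff, *_ = np.linalg.lstsq(A, partial - mean[visible], rcond=None)
    return mean + coeff @ basis

full = train[0]
visible = np.arange(D // 2)           # "lower half" visible, eye region occluded
est = reconstruct(full[visible], visible)
print("relative error:", np.linalg.norm(est - full) / np.linalg.norm(full))
```

    On this rank-K synthetic data the relative error is near zero; on real faces the quality depends on how well the training set spans the wearer's appearance, which is why the paper builds a user-specific model.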

  • 15.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Halawani, Alaa
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Finger in air: touch-less interaction on smartphone, 2013. In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, 2013. Conference paper (Refereed).
    Abstract [en]

    In this paper we present a vision based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection and attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we designed two smartphone applications: a circle menu application, which provides the user with graphics and smartphone status information, and a bouncing ball game, a finger gesture based bouncing ball application. Users interact with these applications using finger gestures through the smartphone's camera view, which triggers the interaction events and generates activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.
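    An illustrative marker-less fingertip detector (the paper's own algorithm is not given in the abstract): threshold skin-like colors in HSV, take the largest contour, and treat its topmost point as the fingertip. The HSV range and area threshold are assumptions.

```python
# Illustrative fingertip detection for camera-view input; all thresholds
# are assumptions, not the paper's method.
import cv2
import numpy as np

LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)    # assumed HSV skin range
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

def fingertip(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 1000:        # ignore small specks
        return None
    x, y = hand[hand[:, :, 1].argmin()][0]  # topmost contour point
    return int(x), int(y)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    print(fingertip(frame) if ok else "no camera available")
    cap.release()
```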

  • 16.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sikandar Lal Khan, Muhammad
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Hand and Foot Gesture Interaction for Handheld Devices, 2013. In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia, New York, NY, USA: ACM, 2013, pp. 621-624. Conference paper (Refereed).
    Abstract [en]

    In this paper we present a hand and foot based immersive multimodal interaction approach for handheld devices. A smartphone based immersive football game is designed as a proof of concept. Our proposed method combines input modalities (i.e. hand and foot) and provides a coordinated output to both modalities along with audio and video. In this work, the human foot gesture is detected and tracked using a template matching method and the Tracking-Learning-Detection (TLD) framework. We evaluated our system's usability through a user study in which we asked participants to evaluate the proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of the proposed multimodal interaction approach.
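    TLD itself is a larger detection-learning system; as a minimal stand-in for the template matching half of the tracking, normalized cross-correlation can locate the shoe template in each new frame:

```python
# Minimal template-matching tracker sketch (not full TLD): locate the shoe
# template in a frame via normalized cross-correlation.
import cv2

def track(frame_gray, template_gray, min_score=0.6):
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    if score < min_score:                 # likely lost; TLD would re-detect
        return None
    h, w = template_gray.shape
    return (top_left[0], top_left[1], w, h), score

if __name__ == "__main__":
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # any test frame
    if frame is not None:
        template = frame[100:160, 100:160]  # pretend this crop is the shoe
        print(track(frame, template))
```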

  • 17.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden..
    Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix, 2013. In: Proceedings of the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China, 2013. Conference paper (Refereed).
    Abstract [en]

    In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences from a monocular camera. In our novel approach, we employ a camera pose estimation method to generate stereoscopic 3D directly from 2D video without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix. To this end, we also demonstrate that correspondence-plane image stitching based on the homography matrix only cannot generate better results. Furthermore, we utilize the structure-from-motion (with fundamental matrix) based reconstructed camera pose model to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most video sequences.
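    A compressed sketch of two ingredients the abstract names (the full camera pose model and stitching pipeline are not reproduced): estimating the fundamental matrix from ORB matches between consecutive frames, and composing a red-cyan anaglyph from two views. The frame file names are placeholders.

```python
# Sketch of two building blocks: fundamental matrix from feature matches,
# and red-cyan anaglyph composition from two nearby frames of a 2D video.
import cv2
import numpy as np

def fundamental_from_frames(img1, img2):
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    return F

def anaglyph(left_bgr, right_bgr):
    """Red channel from the left view, green and blue from the right view."""
    out = right_bgr.copy()
    out[:, :, 2] = left_bgr[:, :, 2]   # BGR order: index 2 is red
    return out

if __name__ == "__main__":
    a = cv2.imread("frame1.png")       # two nearby frames of a 2D video
    b = cv2.imread("frame2.png")
    if a is not None and b is not None:
        g1, g2 = (cv2.cvtColor(x, cv2.COLOR_BGR2GRAY) for x in (a, b))
        print("F =\n", fundamental_from_frames(g1, g2))
        cv2.imwrite("anaglyph.png", anaglyph(a, b))
```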

  • 18.
    LV, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Feng, Shengzhong
    Chinese Academy of Science, China.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Foot motion sensing: augmented game interface based on foot interaction for smartphone, 2014. In: CHI EA '14: CHI '14 Extended Abstracts on Human Factors in Computing Systems, ACM, 2014, pp. 293-296. Conference paper (Refereed).
    Abstract [en]

    We designed and developed two games, a real-time augmented football game and an augmented foot piano game, to demonstrate an innovative interface based on a foot motion sensing approach for smartphones. In the proposed novel interface, a computer vision based hybrid detection and tracking method provides the core support for the foot interaction interface by accurately tracking the shoes. Based on the proposed interaction interface, two demonstrations are developed; the applications employ augmented reality technology to render the game graphics and game status information on the smartphone's screen. The players interact with the game using foot motions toward the rear camera, which trigger the interaction events. This interface supports basic foot motion sensing (i.e. direction of movement, velocity, rhythm).
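    Illustrative only: given a sequence of tracked shoe positions from any tracker (e.g. the template matcher sketched for the previous entry), the three cues the interface reports, direction of movement, velocity, and rhythm, can be derived roughly as follows; the tap heuristic is an assumption.

```python
# Assumed post-processing of tracked shoe positions into basic motion cues.
import numpy as np

def motion_cues(positions, dt):
    """positions: (N, 2) pixel coordinates sampled every dt seconds."""
    pts = np.asarray(positions, dtype=float)
    deltas = np.diff(pts, axis=0)
    speeds = np.linalg.norm(deltas, axis=1) / dt          # px/s per step
    heading = np.degrees(np.arctan2(deltas[-1, 1], deltas[-1, 0]))
    # Rhythm: count reversals of horizontal direction as "taps" (heuristic).
    sign_changes = np.sum(np.diff(np.sign(deltas[:, 0])) != 0)
    return {"speed_px_s": float(speeds.mean()),
            "heading_deg": float(heading),
            "taps": int(sign_changes)}

if __name__ == "__main__":
    track = [(0, 0), (10, 0), (20, 1), (12, 0), (2, 1), (11, 0)]
    print(motion_cues(track, dt=1 / 30))
```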

  • 19.
    ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Nanjing University of Posts and Telecommunications, Nanjing, China.
    Li, Haibo
    Media technology and interaction design, Royal Institute of Technology (KTH), Sweden; Nanjing University of Posts and Telecommunications, Nanjing, China.
    Vibrotactile TV for immersive experience, 2014. In: Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, 2014. Conference paper (Refereed).
    Abstract [en]

    Audio and video are two powerful media forms for shortening the distance between the audience/viewer and the actors or players in TV and film. Recent research shows that people are consuming more and more multimedia content on mobile devices such as tablets and smartphones. Therefore, an important question emerges: how can we render high-quality, personal immersive experiences to consumers on these systems? To give the audience an immersive engagement that differs from 'watching a play', we designed a study that renders fully immersive media, including 'emotional information', through augmented vibrotactile coding on the back of the user along with the audio-video signal. The reported emotional responses to videos viewed with and without haptic enhancement show that participants exhibited an increased emotional response to media with haptic enhancement. Overall, these studies suggest that our approach is effective and that a multisensory approach increases immersion and user satisfaction.
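    A loudly hypothetical sketch of what 'vibrotactile coding on the back of the user' could look like in code: mapping a scene's emotion intensity to amplitude and an on/off pattern across an assumed 4x4 tactor grid. The grid layout and the mapping are illustrative inventions, not the paper's coding scheme.

```python
# Hypothetical vibrotactile encoding: higher emotion intensity activates more
# tactors at higher amplitude. Grid size and fill order are assumptions.
import numpy as np

GRID = (4, 4)                         # rows x cols of vibration motors, assumed

def encode(intensity: float) -> np.ndarray:
    """Map intensity in [0, 1] to per-motor amplitudes on the grid."""
    intensity = float(np.clip(intensity, 0.0, 1.0))
    n_active = int(round(intensity * GRID[0] * GRID[1]))
    pattern = np.zeros(GRID[0] * GRID[1])
    pattern[:n_active] = intensity    # amplitude in [0, 1] per motor
    return pattern.reshape(GRID)

if __name__ == "__main__":
    for level in (0.2, 0.5, 0.9):     # e.g. per-scene emotion annotations
        print(f"intensity {level}:\n{encode(level)}")
```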
