Enabling physical action in computer mediated communication: an embodied interaction approach
Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics (Immersive Interaction Lab (i2lab))
2015 (English). Licentiate thesis, comprehensive summary (Other academic)
Place, publisher, year, edition, pages
Umeå: Department of Applied Physics and Electronics, Umeå University, 2015. 40 p.
Series
Digital Media Lab, ISSN 1652-6295 ; 20
Keyword [en]
biologically inspired system, parallel robot, neck robot, head pose estimation, embodied interaction, telepresence system, quality of interaction, embodied telepresence system, Mona-Lisa gaze effect, eye-contact
National Category
Communication Systems, Interaction Technologies
Identifiers
URN: urn:nbn:se:umu:diva-108569
ISBN: 978-91-7601-321-2 (print)
OAI: oai:DiVA.org:umu-108569
DiVA: diva2:853612
Available from: 2015-09-15. Created: 2015-09-14. Last updated: 2015-09-15. Bibliographically approved.
List of papers
1. Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system
2016 (English). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, pp. 2597-2610. Article in journal (Refereed). Published.
Abstract [en]

Socially interactive systems are embodied agents that engage in social interaction with humans. From a design perspective, these systems follow a biologically inspired (bio-inspired) design that mimics and simulates human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a robot that resembles its biological counterpart, and (iii) motion planning that can mimic the biological movement. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration strategy that combines a computer-vision-based geometric head pose estimation algorithm, a model-based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of the proposed system.
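The embodiment loop described in the abstract — estimate the operator's head pose, then drive a neck mechanism to reproduce it — could be outlined roughly as follows. This is an illustrative sketch only; the joint limits and smoothing gain are hypothetical placeholders, not values from the TEBoT design.

```python
from dataclasses import dataclass

@dataclass
class JointLimits:
    lo: float
    hi: float

# Hypothetical per-axis limits for a 3-DOF neck platform (degrees).
LIMITS = {
    "yaw":   JointLimits(-60.0, 60.0),
    "pitch": JointLimits(-30.0, 30.0),
    "roll":  JointLimits(-25.0, 25.0),
}

def neck_command(head_pose, prev_cmd, alpha=0.3):
    """Map an estimated head pose to smoothed, limit-clamped joint targets.

    head_pose / prev_cmd: dicts with 'yaw', 'pitch', 'roll' in degrees.
    alpha: low-pass factor so the robot follows head motion without jitter.
    """
    cmd = {}
    for axis, lim in LIMITS.items():
        # Clamp the requested pose to the mechanism's range of motion.
        target = max(lim.lo, min(lim.hi, head_pose[axis]))
        # Exponential smoothing approximates a gentle motion profile.
        cmd[axis] = prev_cmd[axis] + alpha * (target - prev_cmd[axis])
    return cmd
```

Called once per vision frame, this keeps the platform inside its mechanical limits while filtering estimation noise from the camera pipeline.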

Keyword
Socially interactive robot, biologically inspired robot, head pose estimation, vision based robot control, model based design, embodied telepresence system
National Category
Robotics, Computer Vision and Robotics (Autonomous Systems), Interaction Technologies
Identifiers
URN: urn:nbn:se:umu:diva-108552; DOI: 10.3233/JIFS-169100; ISI: 000386532000015
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2017-12-04. Bibliographically approved.
2. Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera
2013 (English). In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, pp. 149-153. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm first employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions in a given video sequence, it uses the Tracking-Learning-Detection (TLD) framework. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the structure of human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the head. The accuracy and effectiveness of the proposed method are evaluated on live video sequences against a head-mounted inertial measurement unit (IMU).
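The geometric idea here — inferring head orientation from the positions of the two eye regions on the image plane — can be sketched in simplified form. This is not the paper's algorithm; the anthropometric constants and the foreshortening/offset heuristics below are hypothetical stand-ins for the quantities the paper derives from anthropometric statistics and MPEG-4.

```python
import math

# Assumed anthropometric constants (hypothetical placeholders):
INTEROCULAR_MM = 63.0    # average adult inter-pupillary distance
PIVOT_DEPTH_MM = 100.0   # eye midpoint to assumed neck pivot

def head_pose_from_eyes(left_eye, right_eye, ref_left, ref_right):
    """Estimate (yaw, pitch, roll) in degrees from detected eye centres,
    relative to a frontal reference frame given by ref_left/ref_right."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    # Roll: in-plane rotation of the inter-eye line.
    roll = math.degrees(math.atan2(dy, dx))

    # Yaw: apparent inter-eye distance shrinks with cosine foreshortening.
    ref_dist = math.hypot(ref_right[0] - ref_left[0],
                          ref_right[1] - ref_left[1])
    ratio = max(-1.0, min(1.0, math.hypot(dx, dy) / ref_dist))
    yaw = math.degrees(math.acos(ratio))

    # Pitch: vertical drift of the eye midpoint against the reference,
    # converted to millimetres and related to the pivot depth.
    mid_y = (left_eye[1] + right_eye[1]) / 2
    ref_mid_y = (ref_left[1] + ref_right[1]) / 2
    mm_per_px = INTEROCULAR_MM / ref_dist
    pitch = math.degrees(math.atan2((mid_y - ref_mid_y) * mm_per_px,
                                    PIVOT_DEPTH_MM))
    return yaw, pitch, roll
```

With eyes matching the frontal reference, all three angles come out zero; tilting the inter-eye line produces a corresponding roll.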

Keyword
Head pose estimation, 3D geometric modeling, human motion analysis
National Category
Signal Processing
Research subject
Computerized Image Analysis
Identifiers
URN: urn:nbn:se:umu:diva-82187; DOI: 10.12792/icisip2013.031
Conference
The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013
Available from: 2013-10-28. Created: 2013-10-28. Last updated: 2017-08-16.
3. Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing
2014 (English). In: Design, user experience, and usability: user experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, pp. 574-585. Conference paper, Published paper (Refereed).
Abstract [en]

Despite the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study testing the hypothesis: "an embodied-interaction-based video conferencing system represents nonverbal behaviors better than a standard video conferencing system, thus creating a 'feeling of presence' of a remote person among his/her local collaborators". Our ETS integrates standard audio-video conferencing with a mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation, which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases 'in-meeting interaction' and enhances the 'feeling of presence' of a remote participant among his collaborators.

Place, publisher, year, edition, pages
Springer International Publishing Switzerland, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8518
Keyword
Embodied Interaction, Multimodal Interaction, HCI, Audio-Video Conferencing, Head Gesture, Tele-Presence
National Category
Interaction Technologies, Robotics, Media Engineering, Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:umu:diva-91017; DOI: 10.1007/978-3-319-07626-3_54; ISI: 000342846900054; ISBN: 978-3-319-07625-6, 978-3-319-07626-3
Conference
3rd International Conference on Design, User Experience, and Usability (DUXU), JUN 22-27, 2014, Heraklion, GREECE
Available from: 2014-07-07. Created: 2014-07-07. Last updated: 2017-01-16. Bibliographically approved.
4. A pilot user's perspective in mobile robotic telepresence system
2014 (English). In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014): proceedings, IEEE conference proceedings, 2014, pp. 1-4. Conference paper, Published paper (Refereed).
Abstract [en]

In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for the pilot user. We use an Embodied Telepresence System (ETS) that was previously designed to enhance the video teleconferencing experience of the collaborators. Here, we deploy the ETS in a novel scenario: improving the pilot user's experience during distance communication. The ETS is used to adjust the pilot user's view at the remote location (e.g., a remotely located conference or meeting). We develop a velocity profile control for the ETS that is implicitly driven by the pilot user's head. An experiment was conducted to test whether the view adjustment capability of the ETS improves the collaboration experience of video conferencing for the pilot user. In the user study, participants (pilot users) interacted using the ETS and using a traditional computer-based video conferencing tool. Overall, the user study suggests that our approach is effective and enhances the experience of video conferencing for the pilot user.
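A head-driven view adjustment of the kind the abstract describes — where the pilot's head angle implicitly sets how fast the remote view pans — is often modelled as a dead-zone-plus-proportional velocity law. The sketch below is an illustrative guess at such a scheme; the threshold, gain, and saturation values are hypothetical, not taken from the paper.

```python
def pan_velocity(head_yaw_deg, dead_zone=5.0, gain=0.8, v_max=30.0):
    """Map pilot head yaw (degrees) to a platform pan velocity (deg/s).

    Small head movements inside the dead zone are ignored, so normal
    conversational motion does not drag the view around; beyond it the
    velocity grows proportionally and saturates at v_max.
    """
    mag = abs(head_yaw_deg)
    if mag <= dead_zone:
        return 0.0
    v = min(gain * (mag - dead_zone), v_max)
    return v if head_yaw_deg > 0 else -v
```

Turning the head slightly leaves the view still; a sustained, larger turn pans the remote view in that direction until the head returns to centre.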

Place, publisher, year, edition, pages
IEEE conference proceedings, 2014
National Category
Robotics, Interaction Technologies, Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:umu:diva-108559; DOI: 10.1109/APSIPA.2014.7041635; ISBN: 9781479942503
Conference
Asia-Pacific Signal and Information Processing Association 2014: Annual Summit and Conference, Angkor Wat, Cambodia, Dec 9-12, 2014
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2016-05-15.
5. Gaze perception and awareness in smart devices
2016 (English). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, pp. 55-65. Article in journal (Refereed). Published.
Abstract [en]

Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness and perception are ambiguous during video teleconferencing on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., the 2D screen is unable to convey an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face masks. However, the video teleconferencing devices in common use today are smart devices with 2D screens, so there is a need to improve gaze awareness and perception on these devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup for studying gaze awareness and perception on 2D-screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona-Lisa gaze effect' — the gaze appears directed at the observer regardless of his or her position in the room, and (ii) 'gaze awareness/faithfulness' — the ability of an observer to perceive an accurate spatial relationship between the actor and the object of the actor's gaze. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona-Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze into distant space.
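The contrast between the flat-screen case and the 3D embodiment can be illustrated with a toy geometric model. This is a simplification for intuition only, not the paper's analysis, and the tolerance value is hypothetical: on a 2D screen a face rendered looking into the camera appears to make eye contact with every observer (the Mona-Lisa gaze effect), whereas a physically rotated display directs apparent gaze toward a specific observer.

```python
def perceives_eye_contact(observer_deg, screen_yaw_deg, embodied,
                          tol_deg=10.0):
    """Does an observer at observer_deg (relative to the display's home
    orientation) feel looked at, given a face rendered looking straight
    into the camera?

    Flat 2D screen: the rendered gaze is viewpoint-invariant, so every
    observer perceives eye contact (Mona-Lisa gaze effect).
    3D embodiment: the screen is physically rotated to screen_yaw_deg,
    so only observers near that direction perceive contact.
    """
    if not embodied:
        return True  # gaze "follows" every observer
    return abs(observer_deg - screen_yaw_deg) <= tol_deg

observers = [-40, 0, 40]
flat  = [perceives_eye_contact(o, 0, embodied=False) for o in observers]
robot = [perceives_eye_contact(o, 40, embodied=True) for o in observers]
```

In the flat case all three observers report eye contact simultaneously; with the embodied display rotated toward +40°, only the observer at +40° does, which is what restoring gaze faithfulness means here.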

Place, publisher, year, edition, pages
Elsevier, 2016
Keyword
Mona-Lisa gaze effect, gaze awareness, computer-mediated communication, eye contact, head gesture, gaze faithfulness, embodied telepresence system, tablet PC, HCI
National Category
Interaction Technologies, Robotics, Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:umu:diva-108568; DOI: 10.1016/j.ijhcs.2016.05.002; ISI: 000379367900005
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2017-10-04. Bibliographically approved.

Open Access in DiVA

fulltext (769 kB), 285 downloads
File information
File name: FULLTEXT01.pdf. File size: 769 kB. Checksum: SHA-512
35c68ac380014c580de506ec731669b7a1cd982fa2656398fd9c2af2d88ce0ba1fb76b08689cafc9796bcea7160fdd83e861f478475d531a19b87f620c9a6366
Type: fulltext. Mimetype: application/pdf

Author
Khan, Muhammad Sikandar Lal
Organisation
Department of Applied Physics and Electronics
National Category
Communication Systems, Interaction Technologies