Presence through actions: theories, concepts, and implementations
Khan, Muhammad Sikandar Lal (Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics; Immersive Interaction Lab (i2lab)). ORCID iD: 0000-0002-3037-4244
2017 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

During face-to-face meetings, humans use multimodal information, including verbal information, visual information, body language, facial expressions, and other non-verbal gestures. In contrast, during computer-mediated communication (CMC), humans rely either on mono-modal information, such as text-only, voice-only, or video-only, or on bi-modal information, using audiovisual modalities such as video teleconferencing. Psychologically, the difference between the two lies in the level of the subjective experience of presence: people perceive a reduced feeling of presence in the case of CMC. Despite current advancements, CMC is still far from face-to-face communication, especially in terms of the experience of presence.

This thesis introduces new concepts, theories, and technologies for presence design, in which actions are the core means of creating presence. The contribution of the thesis is thus divided into a technical contribution and a knowledge contribution. Technically, the thesis details novel technologies for improving the presence experience during mediated communication (video teleconferencing). The proposed technologies include action robots (a telepresence mechatronic robot (TEBoT) and a face robot), embodied control techniques (head orientation modeling and virtual-reality-headset-based collaboration), and face reconstruction/retrieval algorithms. These technologies enable action possibilities and embodied interactions that improve the presence experience between distantly located participants. The novel setups were put into real experimental scenarios, and the well-known social, spatial, and gaze-related problems were analyzed.

The developed technologies and the results of the experiments led to the knowledge contribution of this thesis. In terms of knowledge contribution, the thesis presents a more general theoretical conceptual framework for mediated communication technologies, which can guide telepresence researchers toward the development of appropriate technologies for mediated communication applications. Furthermore, the thesis presents a novel strong concept, presence through actions, that brings philosophical understanding to the development of presence-related technologies. The strong concept of presence through actions is intermediate-level knowledge that proposes a new way of creating and developing future 'presence artifacts'. Presence through actions is an action-oriented phenomenological approach to presence that differs from traditional immersive presence approaches, which are based (implicitly) on rationalist, internalist views.

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2017. 172 p.
Series
Digital Media Lab, ISSN 1652-6295; 22
Keyword [en]
Presence, Immersion, Computer mediated communication, Strong concept, Phenomenology, Philosophy, Biologically inspired system, Neck robot, Head pose estimation, Embodied interaction, Virtual reality headset, Social presence, Spatial presence, Face reconstruction/retrieval, Telepresence system, Quality of interaction, Embodied telepresence system, Mona-Lisa gaze effect, eye-contact
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer Systems; Communication Systems; Signal Processing
Research subject
Design; Computer Science; Computer Systems; Computerized Image Analysis
Identifiers
URN: urn:nbn:se:umu:diva-138280. ISBN: 978-91-7601-730-2 (print). OAI: oai:DiVA.org:umu-138280. DiVA: diva2:1133676
Public defence
2017-10-10, Triple Helix, Samverkanshuset, Umeå, 09:00 (English)
Available from: 2017-08-21. Created: 2017-08-16. Last updated: 2017-09-26. Bibliographically approved
List of papers
1. Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system
2016 (English). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, pp. 2597-2610. Article in journal (Refereed). Published
Abstract [en]

Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems follow a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera-based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer-vision-based geometric head pose estimation algorithm, a model-based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of the proposed system.
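To make the control idea concrete, here is a minimal sketch (not code from the paper) of how an estimated head pose could be mapped onto a 3-DOF neck platform; the joint limits, smoothing factor, and interface are assumptions for illustration only:

import numpy as np

# Hypothetical joint limits (degrees) for a 3-DOF neck platform; the
# real TEBoT limits are not stated in the abstract.
JOINT_LIMITS = {"yaw": (-60, 60), "pitch": (-30, 30), "roll": (-25, 25)}
ALPHA = 0.3  # exponential-smoothing factor to suppress estimation jitter

def head_pose_to_servo(pose_deg, prev_cmd):
    """Map an estimated head pose (yaw/pitch/roll, degrees) to clamped,
    smoothed servo targets for the robot's three axes."""
    cmd = {}
    for axis, value in pose_deg.items():
        lo, hi = JOINT_LIMITS[axis]
        target = float(np.clip(value, lo, hi))
        # Low-pass filter: blend the new target with the previous command.
        cmd[axis] = ALPHA * target + (1 - ALPHA) * prev_cmd.get(axis, 0.0)
    return cmd

# One control tick: a yaw estimate beyond the limit is clamped, then smoothed.
prev = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}
print(head_pose_to_servo({"yaw": 72.0, "pitch": -10.0, "roll": 5.0}, prev))

The paper's real-time motion planning and model-based design toolchain would replace this naive filter.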

Keyword
Socially interactive robot, biologically inspired robot, head pose estimation, vision based robot control, model based design, embodied telepresence system
National Category
Robotics; Computer Vision and Robotics (Autonomous Systems); Interaction Technologies
Identifiers
urn:nbn:se:umu:diva-108552 (URN); 10.3233/JIFS-169100 (DOI); 000386532000015 (ISI)
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2017-08-16. Bibliographically approved
2. Gaze perception and awareness in smart devices
2016 (English). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, pp. 55-65. Article in journal (Refereed). Published
Abstract [en]

Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, today's commonly used video teleconferencing devices are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the 3D embodiment of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup for studying gaze awareness/perception in 2D-screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona-Lisa gaze effect', where the gaze appears directed at the observer independent of his/her position in the room, and (ii) 'gaze awareness/faithfulness', the ability to perceive an accurate spatial relationship between the observing person and the object of the actor's gaze. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona-Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze into distant space.
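As a toy illustration of the geometry behind the two gaze issues (this simplification is ours, not the paper's), consider the horizontal angle between the physical screen normal and each collaborator's line of sight; the observer bearings below are made-up values:

def angle_to_observer(screen_yaw_deg, observer_bearing_deg):
    """Unsigned horizontal angle (degrees) between the screen normal and
    the line of sight to an observer seated around the device."""
    return abs(observer_bearing_deg - screen_yaw_deg)

# Three collaborators seated around the device (hypothetical bearings).
observers = {"left": -40, "center": 0, "right": 40}

# Flat 2D screen: the normal never moves, so the geometric relation to every
# observer stays ambiguous, and the picture-plane illusion makes each of them
# perceive direct eye contact (the Mona-Lisa gaze effect).
# 3D embodiment: yawing the neck robot toward one collaborator makes the
# addressee geometrically unambiguous, supporting gaze faithfulness.
for name, bearing in observers.items():
    print(name,
          "flat screen:", angle_to_observer(0, bearing),
          "robot yawed right:", angle_to_observer(40, bearing))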

Place, publisher, year, edition, pages
Elsevier, 2016
Keyword
Mona-Lisa gaze effect, gaze awareness, computer-mediated communication, eye contact, head gesture, gaze faithfulness, embodied telepresence system, tablet PC, HCI
National Category
Interaction Technologies; Robotics; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-108568 (URN); 10.1016/j.ijhcs.2016.05.002 (DOI); 000379367900005 (ISI)
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2017-10-04. Bibliographically approved
3. Action Augmented Real Virtuality Design for Presence
2017 (English). Article in journal (Refereed). In press
Abstract [en]

This article addresses an important question: how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fail to convey the nonverbal behaviors that humans express in face-to-face communication, which results in a lack of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept named real virtuality and propose a new way of achieving it, based on bodily or artifact actions that increase presence; we name this concept presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is the telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and the face representation algorithm is put into a real video teleconferencing scenario to evaluate how well such a system addresses the challenges related to spatial and social presence. We performed a user study in which participants were asked to experience our novel setup and compare it with a traditional video teleconferencing setup. The results show that the action capabilities increase not only the spatial presence but also the social presence of a remote person among local collaborators.
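The abstract does not specify the transport used for the robot control channel; as an illustrative sketch, one might ship HMD orientation samples to TEBoT as JSON-over-UDP datagrams alongside the WebRTC audio/video session (the endpoint, packet format, and rate are all assumptions):

import json
import socket
import time

ROBOT_ADDR = ("192.168.0.42", 9000)  # hypothetical robot endpoint

def send_hmd_pose(sock, yaw, pitch, roll):
    """Ship one HMD orientation sample to the robot as a JSON datagram."""
    packet = json.dumps({
        "t": time.time(),            # timestamp for ordering/drop handling
        "yaw": yaw, "pitch": pitch, "roll": roll,
    }).encode("utf-8")
    sock.sendto(packet, ROBOT_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_hmd_pose(sock, yaw=12.5, pitch=-4.0, roll=1.2)  # e.g., one tick at 60 Hz

UDP suits such a side channel because a late pose sample is better dropped than replayed.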

Keyword
Real virtuality, Virtual reality, Embodiment, Telepresence, Actions, Perception, Embodied telepresence system, WebRTC, Face occlusion, Face retrieval
National Category
Signal Processing; Computer Systems
Research subject
Computing Science; Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-138278 (URN)
Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2017-08-16
4. Moveable facial features in a Social Mediator
2017 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

A brief display of facial-feature-based behavior has a major impact on personality perception in human-human communication. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach to robotic face presentation based on moveable 2D facial features and present a comparative study in which a synthesized face is projected using three setups: (1) a 3D mask, (2) a 2D screen, and (3) our moveable 2D facial-feature-based visualization. We found that the robot's perceived personality and character are highly influenced by the projected face quality as well as by the motion of the facial features.

Keyword
Social robots, telepresence system, facial features, feature tracking, face robot
National Category
Engineering and Technology; Computer Systems; Signal Processing
Research subject
Computer Science; Design
Identifiers
urn:nbn:se:umu:diva-138276 (URN)
Conference
17th International Conference on Intelligent Virtual Agents (IVA 2017)
Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2017-08-16
5. Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera
2013 (English). In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, pp. 149-153. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper, we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions, it also uses the Tracking-Learning-Detection (TLD) framework on a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the structure of human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of the proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
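A minimal numerical sketch of the geometric idea follows: eye centers (e.g., from OpenCV Haar cascades) give roll from the tilt of the inter-eye line, while offsets of the eye midpoint from the face-box center are read as yaw and pitch. This is our toy reduction; the paper's actual model additionally uses anthropometric statistics and MPEG-4 facial parameters, which are not reproduced here:

import numpy as np

def geometric_head_pose(left_eye, right_eye, face_center, iod_ref=64.0):
    """Rough yaw/pitch/roll (degrees) from two eye centers and the face-box
    center on the image plane. iod_ref is a reference interocular distance
    in pixels (an assumed calibration constant)."""
    left = np.asarray(left_eye, dtype=float)
    right = np.asarray(right_eye, dtype=float)
    mid = (left + right) / 2.0
    dx, dy = right - left
    roll = np.degrees(np.arctan2(dy, dx))      # tilt of the inter-eye line
    # Offsets of the eye midpoint from the face center, normalized by the
    # reference distance and read as small head rotations.
    yaw = np.degrees(np.arctan2(mid[0] - face_center[0], iod_ref))
    pitch = np.degrees(np.arctan2(mid[1] - face_center[1], iod_ref))
    return yaw, pitch, roll

print(geometric_head_pose((100, 120), (160, 124), (132, 140)))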

Keyword
Head pose estimation, 3D geometric modeling, human motion analysis
National Category
Signal Processing
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-82187 (URN)10.12792/icisip2013.031 (DOI)
Conference
The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013
Available from: 2013-10-28 Created: 2013-10-28 Last updated: 2017-08-16
6. Tele-Immersion: Virtual Reality based Collaboration
2016 (English). In: HCI International 2016: Posters' Extended Abstracts: 18th International Conference, HCI International 2016, Toronto, Canada, July 17-22, 2016, Proceedings, Part I / [ed] Constantine Stephanidis, Springer, 2016, pp. 352-357. Conference paper, Published paper (Refereed)
Abstract [en]

The 'perception of being present in another space' during video teleconferencing is a challenging task. This work makes an effort to improve a user's perception of being 'present' in another space by employing a virtual reality (VR) headset and an embodied telepresence system (ETS). In our application scenario, a remote participant uses a VR headset to collaborate with local collaborators. At the local site, an ETS is used as a physical representation of the remote participant among his/her local collaborators. The head movements of the remote person are mapped and presented by the ETS along with audio-video communication. Key considerations of the complete design are discussed, and solutions to challenges related to head tracking, audio-video communication, and data communication are presented. The proposed approach is validated by a user study with quantitative analysis of immersion and presence parameters.
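One concrete step such a pipeline needs is converting the headset's orientation quaternion into the yaw/pitch/roll angles a 3-DOF platform can execute; below is the standard ZYX conversion (the paper does not state which convention or headset SDK it uses):

import math

def quat_to_euler(w, x, y, z):
    """Convert a unit quaternion, as typically reported by VR headset SDKs,
    to (yaw, pitch, roll) in degrees using the standard ZYX convention."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp the sine term to avoid math-domain errors from numerical noise.
    s = max(-1.0, min(1.0, 2 * (w * y - z * x)))
    pitch = math.asin(s)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))

print(quat_to_euler(1.0, 0.0, 0.0, 0.0))  # identity rotation -> (0, 0, 0)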

Place, publisher, year, edition, pages
Springer, 2016
Series
Communications in Computer and Information Science, ISSN 1865-0929 ; 617
Keyword
Tele-immersion, Virtual reality, Embodied telepresence system, presence, distal attribution, spatial cognition
National Category
Human Computer Interaction; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-120461 (URN); 10.1007/978-3-319-40548-3_59 (DOI); 000389727300059 (ISI); 978-3-319-40547-6 (ISBN); 978-3-319-40548-3 (ISBN)
Conference
18th International Conference on Human-Computer Interaction (HCI International), Toronto, July 17-22, 2016
Available from: 2016-05-16. Created: 2016-05-16. Last updated: 2017-08-16. Bibliographically approved
7. Face-off: A Face Reconstruction Technique for Virtual Reality (VR) Scenarios
2016 (English). In: The 1st International Workshop on Egocentric Perception, Interaction and Computing, 2016. Conference paper, Oral presentation only (Refereed)
Abstract [en]

Virtual reality (VR) headsets occlude a significant portion of the human face. The real human face is required in many VR applications, for example, video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution builds on asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lips, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step is performed to align the test information with the training data, while the latter uses the half-face information to reconstruct the full face using the aPCA-trained data. The proposed approach is validated with qualitative and quantitative analysis.
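The asymmetry in aPCA (training on full faces, testing on partial ones) can be sketched with ordinary PCA plus a least-squares fit on the observed pixels; this generic PCA-inpainting sketch is ours, not the paper's exact aPCA formulation:

import numpy as np

def train_pca(X, k):
    """X: (n_samples, n_pixels) aligned full-face vectors; returns the mean
    face and the top-k principal components."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruct(partial, known_idx, mean, comps):
    """Fit the PCA coefficients by least squares using only the observed
    pixels (e.g., the lower face visible below the HMD), then synthesize
    the full face from those coefficients."""
    A = comps[:, known_idx].T                  # (n_known, k)
    b = partial - mean[known_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ comps               # full-length face vector

# Toy usage with random stand-in data; real input would be aligned face crops.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 400))                 # 50 "faces" of 20x20 pixels
mean, comps = train_pca(X, k=10)
known = np.arange(200, 400)                    # indices of the visible half
full_face = reconstruct(X[0, known], known, mean, comps)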

Keyword
Virtual Reality, VR headset, Face reconstruction, PCA, wearable setup, Oculus.
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer Systems; Signal Processing
Research subject
Computer Science; Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-138277 (URN); 10.1007/978-3-319-46604-0_35 (DOI)
Conference
European Conference on Computer Vision
Available from: 2017-08-16 Created: 2017-08-16 Last updated: 2017-08-16
8. Distance Communication: Trends and Challenges and How to Resolve them
2014 (English). In: Strategies for a creative future with computer science, quality design and communicability / [ed] Francisco V. C. Ficarra, Kim Veltman, Kaoru Sumi, Jacqueline Alma, Mary Brie, Miguel C. Ficarra, Domen Verber, Bojan Novak, and Andreas Kratky, Italy: Blue Herons Editions, 2014. Chapter in book (Refereed)
Abstract [en]

Distance communication is becoming an important part of our lives because of current advancements in computer-mediated communication (CMC). Despite these advancements, especially in video teleconferencing, CMC is still far from face-to-face (FtF) interaction. This study focuses on advancements in video teleconferencing, their trends, and their challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques for improving the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an embodied-interaction-based distance communication framework is developed, and its effectiveness is validated by user studies. To summarize, this work considers the following questions: What factors make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more non-verbal modalities be added to enhance the video teleconferencing experience? Finally, we provide future directions for embodied-interaction-based video teleconferencing.

Place, publisher, year, edition, pages
Italy: Blue Herons Editions, 2014
Keyword
Video Teleconferencing, Embodied Interaction, HRI, HCI, Nonverbal Communication, Anthropomorphic Design, Embodied Telepresence System.
National Category
Interaction Technologies; Media Engineering; Human Computer Interaction; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-120307 (URN); 978-88-96471-10-4 (ISBN)
Available from: 2016-05-15. Created: 2016-05-15. Last updated: 2017-08-16. Bibliographically approved

Open Access in DiVA

fulltext (97926 kB)
File name: FULLTEXT01.pdf. File size: 97926 kB. Type: fulltext. Mimetype: application/pdf.
Checksum (SHA-512): 4bb07ef33b5f85e905d42fe6501746f28323955120d7ab8a8d604861b186239918495784c1a07fb8c28007a3a24890ffada34b807a655ec9e6627dcee6bf979b

spikblad (210 kB)
File name: SPIKBLAD02.pdf. File size: 210 kB. Type: spikblad. Mimetype: application/pdf.
Checksum (SHA-512): 93c213a42525e6fa79cbc283f5d0f314e6f931dd53052bcf02e71ef3882d91788dc73f142f18c5fcb2fb654f1220377210d4551479a5e74f42f271916501f43f

cover (428 kB)
File name: PREVIEW01.pdf. File size: 428 kB. Type: cover. Mimetype: application/pdf.
Checksum (SHA-512): 914fb130b17c68a7d7de9e23eda2d79bcb750e654e00473fed55092ee5a025f2ef856d5a9024e6925a4d02736134068fc63c2169e7e245730e45159df9d7c3ec
