iFeeling: Vibrotactile rendering of human emotions on mobile phones
ur Réhman, Shafiq (Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics, Digital Media Lab)
Liu, Li (Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics, Digital Media Lab)
2010 (English). In: Mobile multimedia processing: fundamentals, methods, and applications / [ed] Xiaoyi Jiang, Matthew Y. Ma, Chang Wen Chen. Heidelberg, Germany: Springer Berlin, 2010, 1st edition, pp. 1-20. Chapter in book (Other academic)
Abstract [en]

Today, mobile phone technology is mature enough to let us interact effectively with mobile phones using three major senses: vision, hearing, and touch. Just as the camera adds interest and utility to the mobile experience, the vibration motor in a mobile phone offers a new possibility to improve the interactivity and usability of mobile phones. In this chapter, we show that by carefully controlling vibration patterns, more than one bit of information can be rendered with a vibration motor. We demonstrate how to turn a mobile phone into a social interface for the blind so that they can sense the emotional state of others. Technical details are given on how to extract emotional information, design vibrotactile coding schemes, and render vibrotactile patterns, as well as how to carry out user tests to evaluate usability. Experimental studies and user tests have shown that users do perceive and interpret more than one bit of emotional information. This shows the potential to enrich communication among mobile phone users through the touch channel.
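
As a concrete illustration of such a coding scheme, the following minimal Python sketch encodes an emotion label and intensity as an on/off pulse pattern for a single vibration motor: the number of pulses carries the emotion type and the pulse length carries the intensity, so more than one bit is conveyed. The emotion set, durations, and mapping are illustrative assumptions, not the scheme used in the chapter.

```python
# Illustrative sketch (not the chapter's actual coding scheme): encode an
# emotion label and intensity as an on/off pulse pattern for a single
# vibration motor. The emotion set, pulse lengths, and the choice of
# "pulse count = type, pulse length = intensity" are assumptions.

# Hypothetical mapping: number of pulses per burst identifies the emotion type.
PULSES_PER_EMOTION = {"neutral": 1, "happy": 2, "sad": 3, "surprised": 4}

def encode_emotion(emotion: str, intensity: float) -> list[int]:
    """Return a vibration pattern as [off, on, off, on, ...] durations in ms.

    Higher intensity lengthens the 'on' phases, so the same pulse count
    (emotion type) is felt more strongly -- i.e. a single motor carries
    more than one bit of information.
    """
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    pulses = PULSES_PER_EMOTION.get(emotion, 1)
    on_ms = int(80 + 220 * intensity)   # 80-300 ms per pulse (assumed range)
    gap_ms = 120                        # fixed inter-pulse gap (assumed)
    pattern = []
    for _ in range(pulses):
        pattern += [gap_ms, on_ms]      # off-phase, then on-phase
    return pattern

if __name__ == "__main__":
    # A strong 'happy': two long pulses -> [120, 300, 120, 300]
    print(encode_emotion("happy", 1.0))
```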

Place, publisher, year, edition, pages
Heidelberg, Germany: Springer Berlin, 2010, 1st edition, pp. 1-20.
Series
Lecture Notes in Computer Science, ISSN 0302-9743 (print), 1611-3349 (online); 5960
Keyword [en]
emotion estimation, vibrotactile rendering, lip tracking, mobile communication, tactile coding, mobile phone.
National Category
Signal Processing
Research subject
Computerized Image Analysis
Identifiers
URN: urn:nbn:se:umu:diva-32998
DOI: 10.1007/978-3-642-12349-8_1
ISBN: 978-3-642-12348-1 (print)
OAI: oai:DiVA.org:umu-32998
DiVA: diva2:308441
Available from: 2010-04-06. Created: 2010-04-06. Last updated: 2010-04-20. Bibliographically approved.
In thesis
1. Expressing emotions through vibration for perception and control
2010 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [en]
Expressing emotions through vibration
Abstract [en]

This thesis addresses a challenging problem: “how to let the visually impaired ‘see’ others’ emotions”. We, human beings, depend heavily on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information conveyed by facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others at social events. To enhance the social interaction abilities of the visually impaired, this thesis investigates the scientific topic of ‘expressing human emotions through vibrotactile patterns’.

It is quite challenging to deliver human emotions through touch, since our touch channel is very limited. We first investigated how to render emotions through a single vibration motor. We developed a real-time “lipless” tracking system to extract dynamic emotions from the mouth and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals, for example rendering live football games through vibration on a mobile phone to improve the communication and entertainment experience of mobile users. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology that enables the visually impaired to directly interpret human emotions. This was achieved through machine vision techniques and a vibrotactile display. The display comprises a “vibration actuator matrix” mounted on the back of a chair, and the actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical, and semantic representation of facial expressions to replace the state-of-the-art Facial Action Coding System (FACS) approach. We proposed to use the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extending from the center. Blends of emotions lie between those curves and can be defined analytically by the positions of the main curves. The manifold is the “Braille Code” of emotions.
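
As a concrete illustration of the manifold idea, the minimal sketch below embeds a sequence of facial-expression frames into a low-dimensional space with locally linear embedding (LLE), one of the techniques named in the thesis keywords. The raw-pixel features, the scikit-learn implementation, and the 2-D target dimension are assumptions for illustration, not the thesis' exact pipeline.

```python
# A minimal sketch, assuming raw-pixel features and scikit-learn's LLE:
# compute a low-dimensional "expression manifold" from a frame sequence.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def embed_expressions(frames: np.ndarray, n_neighbors: int = 12) -> np.ndarray:
    """Map face images of shape (n_frames, height, width) onto a 2-D manifold.

    Along such a manifold, an expression of increasing intensity ideally
    traces a curve extending away from a neutral center.
    """
    n_frames = frames.shape[0]
    features = frames.reshape(n_frames, -1).astype(float)  # flatten pixels
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=2)
    return lle.fit_transform(features)                      # (n_frames, 2)

if __name__ == "__main__":
    # Synthetic stand-in data: 200 frames of 32x32 "face" images.
    rng = np.random.default_rng(0)
    fake_frames = rng.random((200, 32, 32))
    coords = embed_expressions(fake_frames)
    print(coords.shape)  # (200, 2)
```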

The developed methodology and technology have been extended to build assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. those lacking fine motor control skills), who are unable to access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide the wheelchair user with action information derived from gestures and with system status information, which is very important for enhancing the usability of such an assistive system. The current research work not only provides a foundation stone for vibrotactile rendering systems based on object localization but also takes a concrete step toward a new dimension of human-machine interaction.
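
The sketch below illustrates, under stated assumptions, how such a 2D vibration array could sequentially activate its actuators to trace a trajectory (e.g. a head-gesture position on the manifold). The 4x4 grid size, the timing, and the drive_actuator hardware hook are hypothetical placeholders, not the thesis' actual hardware interface.

```python
# Sketch (assumptions throughout): render a trajectory by sequentially
# activating the nearest actuator in a 2-D vibration array, so the user
# feels the path as a moving tap on the chair back.
import time

ROWS, COLS = 4, 4  # assumed actuator grid size

def drive_actuator(row: int, col: int, duration_s: float) -> None:
    """Placeholder for the real actuator driver; here it just logs and waits."""
    print(f"actuator ({row}, {col}) on for {duration_s:.2f} s")
    time.sleep(duration_s)

def render_trajectory(points, dwell_s: float = 0.15) -> None:
    """Map normalized (x, y) points in [0, 1]^2 to grid cells and pulse
    them one after another."""
    for x, y in points:
        col = min(int(x * COLS), COLS - 1)
        row = min(int(y * ROWS), ROWS - 1)
        drive_actuator(row, col, dwell_s)

if __name__ == "__main__":
    # A diagonal sweep, e.g. a head gesture moving forward and to the right.
    path = [(i / 9, i / 9) for i in range(10)]
    render_trajectory(path)
```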

Place, publisher, year, edition, pages
Umeå: Umeå universitet, Department of Applied Physics and Electronics, 2010. 159 p.
Series
Digital Media Lab, ISSN 1652-6295 ; 12
Keyword
Multimodal Signal Processing, Mobile Communication, Vibrotactile Rendering, Locally Linear Embedding, Object Detection, Human Facial Expression Analysis, Lip Tracking, Object Tracking, HCI, Expectation-Maximization Algorithm, Lipless Tracking, Image Analysis, Visually Impaired.
National Category
Signal Processing; Computer Vision and Robotics (Autonomous Systems); Computer Science; Telecommunications; Information Science
Research subject
Computerized Image Analysis; Computing Science; Electronics; Systems Analysis
Identifiers
URN: urn:nbn:se:umu:diva-32990
ISBN: 978-91-7264-978-1
Public defence
2010-04-28, Naturvetarhuset, N300, Umeå universitet, Umeå, Sweden, 09:00 (English)
Projects
Taktil Video
Available from: 2010-04-07. Created: 2010-04-06. Last updated: 2010-04-20. Bibliographically approved.

Open Access in DiVA

No full text
