umu.se Publications
1 - 50 of 65
  • 1.
    Augustian, Midhumol
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Sandvig, Axel
    Umeå universitet, Medicinska fakulteten, Institutionen för farmakologi och klinisk neurovetenskap. Norwegian University of Science and Technology (NTNU), Norway.
    Kotikawatte, Thivra
    Umeå universitet, Medicinska fakulteten, Institutionen för farmakologi och klinisk neurovetenskap.
    Yongcui, Mi
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Evensmoen, Hallvard Røe
    Norwegian University of Science and Technology (NTNU), Norway.
    EEG Analysis from Motor Imagery to Control a Forestry Crane (2018). In: Intelligent Human Systems Integration (IHSI 2018) / [ed] Karwowski, Waldemar; Ahram, Tareq, 2018, Vol. 722, pp. 281-286. Conference paper (Refereed)
    Abstract [en]

    Brain-computer interface (BCI) systems can provide people with the ability to communicate with and control real-world systems using neural activity. It therefore makes sense to develop an assistive framework for command and control of a future robotic system that supports human-robot collaboration. In this paper, we employ electroencephalographic (EEG) signals recorded by electrodes placed on the scalp. Motor imagery of human hand movement is used to collect brain signals over the motor cortex area. The collected µ-wave (8–13 Hz) EEG signals were analyzed with event-related desynchronization/synchronization (ERD/ERS) quantification to extract a threshold between hand-grip and release movements; this information can be used to control the grasp and release functionality of a forestry crane. The experiment was performed with four healthy persons to demonstrate the proof-of-concept BCI system. The study demonstrates that the proposed method has the potential to assist crane operators performing advanced tasks under heavy cognitive workload.
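    The ERD/ERS quantification mentioned in the abstract compares µ-band power during motor imagery against a resting baseline. A minimal sketch on synthetic data (the -50% grip threshold and the signal model are illustrative stand-ins, not the paper's calibrated values):

    ```python
    import numpy as np

    def band_power(x, fs, low, high):
        """Mean spectral power of x in the [low, high] Hz band (FFT periodogram)."""
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        band = (freqs >= low) & (freqs <= high)
        return psd[band].mean()

    def erd_percent(baseline, event, fs, low=8.0, high=13.0):
        """ERD/ERS quantification: percentage mu-band power change relative to
        a resting baseline. Negative values indicate desynchronization (ERD)."""
        r = band_power(baseline, fs, low, high)
        a = band_power(event, fs, low, high)
        return (a - r) / r * 100.0

    # Synthetic demo: a 10 Hz mu rhythm attenuated during imagined hand grip.
    fs = 250
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(0)
    baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
    imagery = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

    erd = erd_percent(baseline, imagery, fs)
    is_grip = erd < -50.0  # hypothetical threshold separating grip from release
    ```

    Attenuating the µ rhythm to 40% of its baseline amplitude drops band power to roughly 16%, i.e. a strongly negative ERD, which is the effect thresholded here.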

  • 2.
    Ehatisham-ul-Haq, Muhammad
    et al.
    Faculty of Telecom and Information Engineering, University of Engineering and Technology, Taxila, Punjab, Pakistan.
    Awais Azam, Muhammad
    Faculty of Telecom and Information Engineering, University of Engineering and Technology, Taxila, Punjab, Pakistan.
    Naeem, Usman
    School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Ur Rèhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khaild, Asra
    Department of Computer Science, COMSATS Institute of Information Technology, Wah Campus, Pakistan.
    Identifying smartphone users based on their activity patterns via mobile sensing (2017). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 113, pp. 202-209. Journal article (Refereed)
    Abstract [en]

    Smartphones are ubiquitous devices that enable users to perform many of their routine tasks anytime and anywhere. With the advancement of information technology, smartphones are now equipped with sensing and networking capabilities that provide context-awareness for a wide range of applications. Due to ease of use and access, many users store private data on their smartphones, such as personal identifiers and bank account details. This type of sensitive data can be vulnerable if the device gets lost or stolen. The existing methods for securing mobile devices, including passwords, PINs, and pattern locks, are susceptible to many attacks, such as smudge attacks. This paper proposes a novel framework to protect sensitive data on smartphones by identifying smartphone users based on their behavioral traits using smartphone-embedded sensors. A series of experiments has been conducted to validate the proposed framework and demonstrate its effectiveness.
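    The core idea, identifying the user from sensor-derived behavioral traits, can be sketched as enrollment of per-user feature templates followed by nearest-template matching. The feature set and classifier below are illustrative simplifications, not the paper's exact pipeline:

    ```python
    import numpy as np

    def motion_features(window):
        """Time-domain features per axis (mean, std, RMS) from one
        accelerometer window; a stand-in for a richer feature set."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0),
                               np.sqrt((window ** 2).mean(axis=0))])

    def enroll(windows_per_user):
        """Per-user behavioral template: mean feature vector over enrollment windows."""
        return {user: np.mean([motion_features(w) for w in ws], axis=0)
                for user, ws in windows_per_user.items()}

    def identify(templates, window):
        """Attribute a new window to the nearest enrolled template (Euclidean)."""
        f = motion_features(window)
        return min(templates, key=lambda u: float(np.linalg.norm(templates[u] - f)))

    # Synthetic accelerometer data: each user has a characteristic motion bias.
    rng = np.random.default_rng(1)
    def record(bias):
        return bias + 0.05 * rng.standard_normal((128, 3))

    enrollment = {
        "owner":    [record(np.array([0.0, 0.2, 9.8])) for _ in range(5)],
        "intruder": [record(np.array([0.3, -0.1, 9.6])) for _ in range(5)],
    }
    templates = enroll(enrollment)
    who = identify(templates, record(np.array([0.0, 0.2, 9.8])))
    ```

    In a deployed system the identification result would gate access to the sensitive data continuously, rather than relying on a one-time unlock.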

  • 3.
    Fahlquist, Karin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Karlsson, Johannes
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ren, Keni
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur-Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Wark, Tim
    CSIRO.
    Human animal machine interaction: Animal behavior awareness and digital experience (2010). In: Proceedings of ACM Multimedia 2010 - Brave New Ideas, 25-29 October 2010, Firenze, Italy, 2010, pp. 1269-1274. Conference paper (Refereed)
    Abstract [en]

    This paper proposes an intuitive wireless sensor/actuator-based communication network for human-animal interaction in a digital zoo. To enable effective observation of and control over wildlife, we have built a wireless sensor network: 25 video-transmitting nodes are installed for animal behavior observation, and experimental vibrotactile collars have been designed for effective control in an animal park.

    The goal of our research is twofold. Firstly, to provide interaction between digital users and animals, and to monitor animal behavior for safety purposes. Secondly, to investigate how animals can be controlled or trained using vibrotactile stimuli instead of electric stimuli.

    We have designed a multimedia sensor network for human-animal-machine interaction and evaluated the effect of the human-animal-machine state communication model in field experiments.

  • 4.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Active Vision for Tremor Disease Monitoring (2015). In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 2015, Vol. 3, pp. 2042-2048. Conference paper (Refereed)
    Abstract [en]

    The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been used for this purpose before, the system we introduce differs intrinsically from traditional systems. The essential difference is the placement of the camera on the user's body rather than in front of it, thus reversing the whole process of motion estimation; we call this active motion tracking. Active vision is simpler to set up and achieves more accurate results than traditional arrangements, which we refer to here as "passive". A main advantage of active tracking is its ability to detect even tiny motions with a simple setup, which makes it very suitable for monitoring tremor disorders.

  • 5.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Anani, Adi
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Active vision for controlling an electric wheelchair (2012). In: Intelligent Service Robotics, ISSN 1861-2776, Vol. 5, no. 2, pp. 89-98. Journal article (Refereed)
    Abstract [en]

    Most of the electric wheelchairs available on the market are joystick-driven and therefore assume that the user is able to use hand motion to steer the wheelchair. This does not apply to many users who are only capable of moving the head, such as quadriplegia patients. This paper presents a vision-based head motion tracking system that enables such patients to control the wheelchair. The novel approach we suggest is to use active rather than passive vision to achieve head motion tracking. In active vision-based tracking, the camera is placed on the user's head rather than in front of it. This makes tracking easier and more accurate, and enhances the resolution, as we demonstrate both theoretically and experimentally. The proposed tracking scheme is then used successfully to control our electric wheelchair navigating in a real-world environment.
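    The active-vision premise is that with the camera on the head, global image motion between frames is itself the head motion. A minimal sketch using integer-pixel phase correlation; the wheelchair command mapping and thresholds are hypothetical, not the paper's controller:

    ```python
    import numpy as np

    def ego_shift(prev, curr):
        """Estimate the global image shift between two frames with
        phase correlation; on a head-mounted camera this shift
        directly reflects head motion."""
        cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
        dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
        h, w = prev.shape
        if dy > h // 2:   # unwrap circular shifts to signed offsets
            dy -= h
        if dx > w // 2:
            dx -= w
        return dx, dy

    def wheelchair_command(dx, dy, dead_zone=2):
        """Hypothetical mapping from estimated head motion to a drive command."""
        if abs(dx) <= dead_zone and abs(dy) <= dead_zone:
            return "stop"
        if abs(dx) > abs(dy):
            return "turn-right" if dx > 0 else "turn-left"
        return "forward" if dy < 0 else "reverse"

    # Demo: the head turns, so the whole scene shifts 5 px to the right.
    rng = np.random.default_rng(2)
    scene = rng.random((64, 64))
    dx, dy = ego_shift(scene, np.roll(scene, shift=5, axis=1))
    cmd = wheelchair_command(dx, dy)
    ```

    Because the whole frame moves coherently, even small head motions produce a detectable global shift, which is the resolution advantage the abstract claims for the active arrangement.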

  • 6.
    Harisubramanyabalaji, Subramani Palanisamy
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Scania CV AB, Södertälje, Sweden.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Nyberg, Mattias
    Gustavsson, Joakim
    Improving Image Classification Robustness Using Predictive Data Augmentation (2018). In: Computer Safety, Reliability, and Security: SAFECOMP 2018 / [ed] Gallina B., Skavhaug A., Schoitsch E., Bitsch F., Springer, 2018, pp. 548-561. Conference paper (Refereed)
    Abstract [en]

    Safe autonomous navigation is challenging if there is a failure in the sensing system. A classification algorithm that is robust to camera position, view angle, and environmental conditions, across vehicles of different sizes and types (car, bus, truck, etc.), can safely support vehicle control. As training data play a crucial role in robust classification of traffic signs, an effective augmentation technique is required that enriches the model's capacity to withstand variations in the urban environment. In this paper, a framework for identifying model weaknesses and a targeted augmentation methodology are presented. Based on off-line behavior identification, the exact limitations of a Convolutional Neural Network (CNN) model are estimated so as to augment only those challenge levels necessary for improved classifier robustness. Predictive Augmentation (PA) and Predictive Multiple Augmentation (PMA) methods are proposed to adapt the model based on the identified challenges with a high numerical value of confidence. We validated our framework on two different training datasets and with 5 generated test groups containing varying levels of challenge (simple to extreme). The results show an improvement of 5-20% in overall classification accuracy while maintaining high confidence.
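    The targeted-augmentation idea, measure per-challenge accuracy offline and augment only the weak levels, can be sketched as follows. The threshold, challenge names, and one-copy-per-level policy are illustrative, not the paper's PA/PMA parameters:

    ```python
    def predictive_augmentation(per_challenge_accuracy, train_pool, threshold=0.90):
        """Targeted augmentation sketch: augment only the challenge levels
        where the classifier's offline accuracy falls below `threshold`."""
        weak_levels = [c for c, acc in per_challenge_accuracy.items()
                       if acc < threshold]
        augmented = list(train_pool)
        for level in weak_levels:
            # One extra synthetic copy of the pool per weak challenge level.
            augmented += [(f"{image}+{level}", label) for image, label in train_pool]
        return weak_levels, augmented

    # Offline behavior identification on generated test groups (toy numbers).
    accuracy = {"blur": 0.95, "rain": 0.62, "glare": 0.71, "night": 0.93}
    pool = [("sign_001", "stop"), ("sign_002", "yield")]
    weak, train_set = predictive_augmentation(accuracy, pool)
    ```

    Restricting augmentation to the identified weak levels keeps the training set small while spending model capacity exactly where the offline evaluation showed it was needed.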

  • 7.
    Karlsson, Johannes
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Augmented reality to enhance visitors' experience in a digital zoo (2010). In: Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (ACM MUM'10), Limassol, Cyprus, 2010. Conference paper (Refereed)
  • 8.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Computer Engineering Department, Palestine Polytechnic University, Hebron 90100, Palestine.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Action Augmented Real Virtuality Design for Presence (2018). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no. 4, pp. 961-972. Journal article (Refereed)
    Abstract [en]

    This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fall short in presenting the nonverbal behaviors that humans express in face-to-face communication, which results in a decreased experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions to increase the feeling of presence; we name this concept presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that addresses the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to address the challenges related to spatial and social presence. We performed a user study in which the invited participants were asked to experience our novel setup and to compare it with a traditional video teleconferencing setup. The results show that the action capabilities increase not only the feeling of spatial presence but also the feeling of social presence of a remote person among local collaborators.

  • 9.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Réhman, Shafiq ur
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing (2014). In: Design, user experience, and usability: User experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, pp. 574-585. Conference paper (Refereed)
    Abstract [en]

    Despite the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study of the hypothesis: "An embodied-interaction-based video conferencing system performs better than a standard video conferencing system in representing nonverbal behaviors, thus creating a 'feeling of presence' of a remote person among his/her local collaborators". Our ETS integrates standard audio-video conferencing with mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we also performed a system evaluation, which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases 'in-meeting interaction' and enhances the 'feeling of presence' between a remote participant and his/her collaborators.

  • 10.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Gaze perception and awareness in smart devices (2016). In: International Journal of Human-Computer Studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, pp. 55-65. Journal article (Refereed)
    Abstract [en]

    Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of a 3D human face; i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, the video teleconferencing devices in common use today are smart devices with 2D screens, so there is a need to improve gaze awareness/perception on these devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup for studying gaze awareness/perception on 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona Lisa gaze effect', where the gaze appears directed at the observer independent of his/her position in the room, and (ii) 'gaze awareness/faithfulness', the ability to perceive an accurate spatial relationship between the observing person and the observed object. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.

  • 11.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH Royal Institute of Technology.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Tele-Immersion: Virtual Reality based Collaboration (2016). In: HCI International 2016: Posters' Extended Abstracts: 18th International Conference, HCI International 2016, Toronto, Canada, July 17-22, 2016, Proceedings, Part I / [ed] Constantine Stephanidis, Springer, 2016, pp. 352-357. Conference paper (Refereed)
    Abstract [en]

    The 'perception of being present in another space' during video teleconferencing is a challenging task. This work makes an effort to improve a user's perception of being 'present' in another space by employing a virtual reality (VR) headset and an embodied telepresence system (ETS). In our application scenario, a remote participant uses a VR headset to collaborate with local collaborators. At the local site, an ETS is used as a physical representation of the remote participant among his/her local collaborators. The head movements of the remote person are mapped and presented by the ETS along with audio-video communication. Key considerations of the complete design are discussed, and solutions to challenges related to head tracking, audio-video communication, and data communication are presented. The proposed approach is validated by a user study with quantitative analysis of immersion and presence parameters.

  • 12.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system (2016). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, pp. 2597-2610. Journal article (Refereed)
    Abstract [en]

    Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built by considering a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is an embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer vision based geometric head pose estimation algorithm, a model based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of the proposed system.

  • 13.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Distance Communication: Trends and Challenges and How to Resolve them (2014). In: Strategies for a creative future with computer science, quality design and communicability / [ed] Francisco V. C. Ficarra, Kim Veltman, Kaoru Sumi, Jacqueline Alma, Mary Brie, Miguel C. Ficarra, Domen Verber, Bojan Novak, and Andreas Kratky, Italy: Blue Herons Editions, 2014. Book chapter (Refereed)
    Abstract [en]

    Distance communication is becoming an important part of our lives because of the current advancement of computer-mediated communication (CMC). Despite this advancement, CMC, and video teleconferencing in particular, is still far from face-to-face (FtF) interaction. This study focuses on the advancements in video teleconferencing, their trends and challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques to improve the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an embodied-interaction-based distance communication framework is developed, and its effectiveness is validated by user studies. To summarize, this work considers the following questions: What factors make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more nonverbal modalities be added to enhance the video teleconferencing experience? At the end, we also provide future directions for embodied-interaction-based video teleconferencing.

  • 14.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Embodied head gesture and distance education (2015). In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, 2015, Vol. 3, pp. 2034-2041. Conference paper (Refereed)
    Abstract [en]

    Traditional distance education settings are usually based on video teleconferencing scenarios in which human emotions and social presence are expressed only by facial and vocal expressions, which is not enough for complete presence; our bodily gestures and actions play a vital role in understanding the exact meaning of communication patterns, especially in teaching-learning scenarios. Bodily gestures, especially head movements, offer cues for understanding contextual knowledge during conversational dialogue. In this work, we consider the embodiment of the tutor's head gestures in an educational assistive robot and compare the results with the standard audio-video teleconferencing scenarios used in online education. We use the Embodied Telepresence System (ETS) to investigate distance communication in an online education setting; our ETS emulates the head gestures of the human tutor. Our experimental study includes ten able-bodied subjects (5 male and 5 female) from various countries. The participants were asked to take part in an online education scenario through (i) a traditional video conferencing tool, i.e. Skype, and (ii) an extended setup based on the ETS. Statistical analysis of the results indicates the effectiveness of our novel embodied-head-gesture-based approach and shows that the proposed ETS design is able to improve user engagement in distance education.

  • 15.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Expressive Multimedia: Bringing Action to Physical World by Dancing-Tablet (2015). In: Proceedings of the 2nd Workshop on Computational Models of Social Interactions: Human-Computer-Media Communication, ACM Digital Library, 2015, pp. 9-14. Conference paper (Refereed)
    Abstract [en]

    The design practice based on the embodied interaction concept focuses on developing new user interfaces for computing devices that merge digital content with the physical world. In this work we propose a novel embodied-interaction-based design in which the 'action' information of digital content is presented in the physical world. More specifically, we map the 'action' information of video content from the digital world into the physical world. The motivating example presented in this paper is our novel dancing-tablet: a tablet PC dances to the rhythm of a song, so the 'action' information is not confined to a 2D flat display but is also expressed by it. This paper presents (i) the hardware design of our mechatronic dancing-tablet platform, (ii) the software algorithm for musical feature extraction, and (iii) an embodied computational model for mapping the 'action' information of the musical expression onto the mechatronic platform. Our user study shows that the overall perception of audio-video music is enhanced by our dancing-tablet setup.
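    The musical feature extraction step can be illustrated with a toy energy-flux beat picker on a synthetic click track; the frame length, threshold rule, and test signal are assumptions for the sketch, not the paper's algorithm:

    ```python
    import numpy as np

    def beat_times(signal, fs, frame=400):
        """Energy-flux beat picking: non-overlapping frames whose short-time
        energy jumps well above the median flux are taken as beats."""
        n_frames = (len(signal) - frame) // frame
        energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                           for i in range(n_frames)])
        flux = np.diff(energy, prepend=energy[0])          # energy rise per frame
        onsets = np.nonzero(flux > np.median(flux) + 2.0 * flux.std())[0]
        return onsets * frame / fs

    # Synthetic demo: clicks every 0.5 s; the tablet would "dance" on these beats.
    fs = 8000
    signal = np.zeros(3 * fs)
    for beat in (0.5, 1.0, 1.5, 2.0, 2.5):
        start = int(beat * fs)
        signal[start:start + 800] = 1.0

    beats = beat_times(signal, fs)
    ```

    The detected beat times would then drive the embodied computational model, e.g. as trigger points for the platform's motion.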

  • 16.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    La Hera, Pedro
    Liu, Feng
    Li, Haibo
    A pilot user's prospective in mobile robotic telepresence system (2014). In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014. Conference paper (Refereed)
    Abstract [en]

    In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for the pilot user. We use an Embodied Telepresence System (ETS), previously designed to enhance the experience of video teleconferencing for the collaborators, in a novel scenario: improving the experience of the pilot user during distance communication. The ETS is used to adjust the view of the pilot user at the distant location (e.g. a distantly located conference or meeting). A velocity profile control for the ETS is developed, which is implicitly controlled by the head of the pilot user. An experiment was conducted to test whether the view-adjustment capability of the ETS increases the collaboration experience of video conferencing for the pilot user. In a user study, participants (pilot users) interacted using the ETS and using a traditional computer-based video conferencing tool. Overall, the user study suggests the effectiveness of our approach in enhancing the experience of video conferencing for the pilot user.
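    Velocity profile control for a view-adjusting platform is commonly realized as a trapezoidal profile: ramp up, cruise, ramp down. A minimal sketch under that assumption (the paper's exact profile shape and parameters are not specified here):

    ```python
    def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
        """Sampled trapezoidal velocity profile covering `distance`:
        accelerate at a_max, cruise at v_max, decelerate to rest.
        Falls back to a triangular profile for short moves."""
        t_acc = v_max / a_max
        if a_max * t_acc ** 2 > distance:          # short move: never reach v_max
            t_acc = (distance / a_max) ** 0.5
            v_peak, t_cruise = a_max * t_acc, 0.0
        else:
            v_peak = v_max
            t_cruise = (distance - a_max * t_acc ** 2) / v_max
        total = 2.0 * t_acc + t_cruise
        velocities, t = [], 0.0
        while t < total:
            if t < t_acc:                          # acceleration ramp
                v = a_max * t
            elif t < t_acc + t_cruise:             # cruise
                v = v_peak
            else:                                  # deceleration ramp
                v = max(0.0, v_peak - a_max * (t - t_acc - t_cruise))
            velocities.append(v)
            t += dt
        return velocities

    # 1 rad head-commanded view adjustment, capped at 0.5 rad/s and 1 rad/s^2.
    profile = trapezoidal_profile(distance=1.0, v_max=0.5, a_max=1.0)
    covered = sum(profile) * 0.001                 # numeric integral of velocity
    ```

    Bounding both velocity and acceleration keeps the remote camera motion smooth, which matters when the platform is driven implicitly by head movements.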

  • 17.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Lu, Zhihan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera (2013). In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, pp. 149-153. Conference paper (Other academic)
    Abstract [en]

    In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions, it also uses the Tracking-Learning-Detection (TLD) framework over a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the structure of human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of the proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
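    The geometric intuition, pose angles recovered from how the two eye regions project onto the image plane, can be sketched for yaw and roll. The calibrated frontal eye distance stands in for the paper's anthropometric/MPEG-4 derivation, and pitch (which needs the pivot point) is omitted:

    ```python
    import numpy as np

    def pose_from_eyes(left_eye, right_eye, frontal_eye_dist_px):
        """Geometric pose sketch: roll from the tilt of the eye line,
        yaw from foreshortening of the inter-eye distance on the image
        plane. `frontal_eye_dist_px` is the calibrated frontal-view eye
        distance (an assumed stand-in for the anthropometric measure)."""
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        roll = np.degrees(np.arctan2(dy, dx))
        ratio = np.clip(np.hypot(dx, dy) / frontal_eye_dist_px, 0.0, 1.0)
        yaw = np.degrees(np.arccos(ratio))   # shrinking eye distance => head turn
        return yaw, roll

    # Frontal face: eyes level at the calibrated 60 px separation.
    yaw0, roll0 = pose_from_eyes((100, 120), (160, 120), 60.0)
    # Turned head: eye distance foreshortened to half the frontal value.
    yaw1, _ = pose_from_eyes((100, 120), (130, 120), 60.0)
    ```

    Note the foreshortening cue alone cannot distinguish left from right turns; in the full method the pivot-point model and tracking history resolve that ambiguity.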

  • 18.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Lu, Zhihan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Tele-embodied agent (TEA) for video teleconferencing (2013). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, New York, 2013. Conference paper (Refereed)
    Abstract [en]

    We propose a design for a teleconference system that expresses nonverbal behavior (in our case, head gestures) along with audio-video communication. Previous audio-video conferencing systems fail to present the nonverbal behaviors that we, as humans, routinely use in face-to-face interaction. Recently, research in teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. Accurate representation of nonverbal gestures in such systems is still challenging because they depend on hand-operated devices (such as a mouse or keyboard), and they still fall short of presenting accurate human gestures. We believe that incorporating embodied interaction in video teleconferencing (i.e., using the physical world as a medium for interacting with digital technology) can yield a faithful representation of nonverbal behavior. We introduce an experimental platform named Tele-Embodied Agent (TEA), which incorporates the remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary tests show the accuracy (with respect to pose angles) and efficiency (with respect to time) of the proposed design. TEA can be used in medicine, factories, offices, the gaming industry, the music industry, and in training.

  • 19.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. University of East London, United Kingdom.
    Mi, Yongcui
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Naeem, Usman
    University of East London, United Kingdom.
    Beskow, Jonas
    The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Li, Haibo
    The Royal Institute of Technology (KTH), Stockholm, Sweden.
    Moveable facial features in a Social Mediator (2017). In: Intelligent Virtual Agents: IVA 2017 / [ed] Beskow J., Peters C., Castellano G., O'Sullivan C., Leite I., Kopp S., Springer London, 2017, pp. 205-208. Conference paper (Refereed)
    Abstract [en]

    A brief display of facial-feature-based behavior has a major impact on personality perception in human-human communication. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach to robotic face presentation based on moveable 2D facial features and present a comparative study in which a synthesized face is projected using three setups: 1) a 3D mask, 2) a 2D screen, and 3) our moveable 2D facial-feature-based visualization. We found that the robot's personality and character are strongly influenced by the quality of the projected face as well as by the motion of the facial features.

  • 20.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. University of East London, London, England.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Face-off: a Face Reconstruction Technique for Virtual Reality (VR) Scenarios (2016). In: Computer Vision: ECCV 2016 Workshops / [ed] Hua G., Jégou H., Springer, 2016, Vol. 9913, pp. 490-503. Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) headsets occlude a significant portion of the human face. The real face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. At the core of our solution is asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lip and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: i) a calibration phase and ii) a reconstruction phase. In the former, a small calibration step aligns the test information with the training data, while the latter uses half-face information to reconstruct the full face from the aPCA-trained data. The proposed approach is validated with qualitative and quantitative analysis.
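The reconstruction step can be pictured as regression from the visible pixels into a learned face subspace. The sketch below is a plain-PCA stand-in, not the paper's aPCA: it fits the subspace coefficients by least squares on the visible rows of the eigenfaces and re-projects to the full pixel vector. All function names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def fit_pca(X, k):
    """Plain PCA on row-stacked training faces X (n_samples x n_pixels)."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # mean face and top-k eigenfaces

def reconstruct_from_partial(y_visible, visible_idx, mean, components):
    """Estimate the full face vector from the visible pixels only.

    Least-squares fit of the subspace coefficients on the visible rows of
    the eigenfaces, then re-projection to all pixels -- a simplified
    stand-in for the paper's aPCA reconstruction step.
    """
    A = components[:, visible_idx].T            # visible rows of eigenfaces
    b = y_visible - mean[visible_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ components
```

On synthetic low-rank "faces", a training sample hidden except for a few pixels is recovered exactly, which is the idealised version of reconstructing the occluded upper face from the unoccluded lower half.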

  • 21.
    Li, Bo
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Jevtic, Aleksandar
    Robosoft,France.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Fast edge detection by center of mass (2013). In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013), Kitakyushu, Japan, 2013, pp. 103-110. Conference paper (Refereed)
    Abstract [en]

    In this paper, a novel edge detection method that computes the image gradient using the concept of Center of Mass (COM) is presented. By using an integral image, the algorithm runs with a constant number of operations per pixel, independent of scale. Compared with conventional convolutional edge detectors such as the Sobel detector, the proposed method is faster when the region size is larger than 9×9. The method can serve as a framework for multi-scale edge detectors when fast performance is the goal. Experimental results show that edge detection by COM is competitive with Canny edge detection.
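The constant-cost-per-pixel claim follows from integral images: any rectangular window sum costs four lookups regardless of the window radius. The sketch below computes a COM-style gradient that way — the offset of each window's centre of mass from its geometric centre points along the intensity gradient. The square window, normalisation, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def com_gradient(img, r):
    """Image 'gradient' from the Centre of Mass (COM) of each (2r+1)^2 window.

    All window sums come from integral images, so the per-pixel cost is
    constant in the radius r -- the property the paper exploits.
    """
    img = img.astype(np.float64)
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]

    def iimg(a):  # integral image with a zero guard row/column
        return np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

    S, Sx, Sy = iimg(img), iimg(img * xs), iimg(img * ys)

    def box(ii, y0, y1, x0, x1):  # inclusive window sum, four lookups
        return ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]

    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for y in range(r, H - r):
        for x in range(r, W - r):
            m = box(S, y - r, y + r, x - r, x + r)
            if m > 0:
                # COM offset from the window centre ~ gradient direction
                gx[y, x] = box(Sx, y - r, y + r, x - r, x + r) / m - x
                gy[y, x] = box(Sy, y - r, y + r, x - r, x + r) / m - y
    return gx, gy
```

On a vertical step edge the COM shifts toward the brighter side, giving a positive horizontal response at the edge and zero response in flat regions.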

  • 22.
    Li, Bo
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Restricted Hysteresis Reduce Redundancy in Edge Detection (2013). In: Journal of Signal and Information Processing, ISSN 2159-4465, E-ISSN 2159-4481, Vol. 4, no. 3B, pp. 158-163. Journal article (Refereed)
    Abstract [en]

    Edge detection algorithms share a common redundancy problem, especially when the gradient direction is close to -135°, -45°, 45°, and 135°: a double-edge effect appears on edges around these directions, caused by the discrete calculation of non-maximum suppression. Many algorithms use edge points as features for further tasks such as line extraction, curve detection, matching and recognition, so redundancy is an important factor in algorithm speed and accuracy. We find that most edge detection algorithms have a redundancy of 50% in the worst case and 0% in the best case, depending on the edge direction distribution; the typical redundancy rate on natural images is approximately 15% to 20%. Based on Canny's framework, we propose a restriction in the hysteresis step. Our experiments show that the proposed restricted hysteresis successfully reduces this redundancy.

  • 23.
    Li, Bo
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    i-Function of Electronic Cigarette: Building Social Network by Electronic Cigarette (2011). In: 2011 IEEE International Conferences on Internet of Things and Cyber, Physical and Social Computing / [ed] Feng Xia, Zhikui Chen, Gang Pan, Laurence T. Yang, and Jianhua Ma, Los Alamitos, CA, USA: IEEE Computer Society, 2011, pp. 634-637. Conference paper (Refereed)
    Abstract [en]

    In this paper the role of the electronic cigarette (e-cigarette) is considered in the context of social networking and internet-based help for smoking cessation or reduction. The electronic cigarette can be a good conversation starter and interaction device, and its novelty can be used for social network building, with virtual communities (e.g. Facebook, Twitter) used to exchange experiences and to support each other. A framework for social network interaction through the interact function (i-function) of the electronic cigarette is presented, which enables two e-cigarette users to interact immediately when they are in close range. The framework also offers a functional possibility of reflecting people's emotions on social networking websites.

  • 24.
    Li, Bo
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Independent Thresholds on Multi-scale Gradient Images (2013). In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013), Kitakyushu, Japan, 2013, pp. 124-131. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a multi-scale edge detection algorithm based on proportional scale summing. Our analysis shows that proportional scale summing successfully improves the edge detection rate by applying independent thresholds on multi-scale gradient images. The proposed method improves edge detection and localization by summing gradient images with a proportional parameter c^n (c < 1), which ensures that the detected edges are as close as possible to the fine scale. We employ a non-maximum suppression and thinning step, similar to the Canny edge detection framework, on the summed gradient images. Experimental results show that the proposed method detects edges successfully and yields better edge detection performance than the Canny edge detector and the scale-multiplication edge detector.
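The c^n weighting itself is a one-line combination step: scale n contributes with weight c^n, so the finest scale (n = 0) dominates and localization stays close to it. The sketch below shows only that summing step, with c = 0.5 as an arbitrary illustrative value; non-maximum suppression, thinning, and the per-scale thresholds are omitted.

```python
import numpy as np

def proportional_scale_sum(gradients, c=0.5):
    """Combine multi-scale gradient magnitude images G_0 (finest) .. G_N
    with weights c**n, c < 1, so that fine-scale structure dominates the
    summed gradient image (the paper's proportional scale summing; the
    value c = 0.5 here is just an illustrative choice)."""
    assert 0 < c < 1, "the method requires a proportional parameter c < 1"
    out = np.zeros_like(gradients[0], dtype=np.float64)
    for n, g in enumerate(gradients):
        out += (c ** n) * g          # weight c^n for scale n
    return out
```

For two scales the result is simply G_0 + c·G_1, which makes the fine-scale bias explicit.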

  • 25.
    Li, Liu
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Lindahl, Olof
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Vibrotactile chair: A social interface for blind (2006). In: Proceedings SSBA 2006: Symposium on image analysis, Umeå, March 16-17, 2006 / [ed] Fredrik Georgsson, Niclas Börlin, Umeå: Umeå University, Department of Computing Science, 2006, pp. 117-120. Conference paper (Other academic)
    Abstract [en]

    In this paper we present our vibrotactile chair, a social interface for the blind. With this chair, a blind person can get on-line emotion information from the person he or she is facing. This greatly enhances communication ability and improves the quality of social life of the blind. In the paper we discuss the technical challenges and design principles behind the chair, and introduce the experimental platform: the tactile facial expression appearance recognition system (TEARS)™.

  • 26.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Chinese Academy of Science, China.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Palestine Polytechnic University.
    Feng, Shengzhong
    Chinese Academy of Science, China.
    Li, Haibo
    Royal Institute of Technology, Stockholm, Sweden.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Multimodal Hand and Foot Gesture Interaction for Handheld Devices (2014). In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 11, no. 1, article id 10. Journal article (Refereed)
    Abstract [en]

    We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. The foot gesture is detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. The 3D foot pose is estimated from the passive homography matrix of the camera. 3D stereoscopic rendering and vibrotactile feedback are used to enhance the immersive feeling. As a proof of concept, we developed a multimodal football game based on this approach, and we confirm user satisfaction with our system through a user study.

  • 27.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khan, Muhammad Sikandar Lal
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Finger in air: touch-less interaction on smartphone (2013). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia 2013, Luleå, Sweden, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection and attempts to provide a 'natural user interface'; no additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we design two smartphone applications: a circle menu application, which provides the user with graphics and smartphone status information, and a bouncing ball game, a finger-gesture-based application. Users interact with these applications using finger gestures through the smartphone's camera view, which trigger the interaction events and generate activity sequences for interactive buffers. Our preliminary user study demonstrates the effectiveness and social acceptability of the proposed interaction approach.

  • 28.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Sikandar Lal Khan, Muhammad
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Hand and Foot Gesture Interaction for Handheld Devices (2013). In: MM '13: Proceedings of the 21st ACM International Conference on Multimedia, New York, NY, USA: ACM, 2013, pp. 621-624. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a hand- and foot-based immersive multimodal interaction approach for handheld devices. A smartphone-based immersive football game is designed as a proof of concept. Our method combines input modalities (i.e. hand and foot) and provides a coordinated output to both modalities along with audio and video. In this work, the foot gesture is detected and tracked using a template matching method and the Tracking-Learning-Detection (TLD) framework. We evaluated the system's usability through a user study in which participants assessed the proposed interaction method. Our preliminary evaluation demonstrates the efficiency and ease of use of the proposed multimodal interaction approach.
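As a rough illustration of the detection side, the sketch below locates a template (e.g. the tracked shoe) in a grey-scale frame by normalised cross-correlation. It is a generic stand-in for the paper's template matching step; the contour-based variant and the TLD tracker built around it are not reproduced, and the brute-force search is written for clarity, not speed.

```python
import numpy as np

def match_template(frame, tmpl):
    """Best (y, x) position and NCC score of tmpl inside frame.

    Plain normalised cross-correlation over every window -- an assumed,
    simplified version of the template matching used for foot tracking.
    """
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    H, W = frame.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = frame[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            # Flat windows have zero norm; give them a neutral score.
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

An exact copy of the template in the frame scores 1.0 at its true position, which is what makes NCC a usable (if simple) detector.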

  • 29.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. SIAT, Chinese Academy of Science, China.
    ur Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. SIAT, Chinese Academy of Science, China.
    Multi-Gesture based Football Game in Smart Phones (2013). In: SA '13: SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, NY, USA: Association for Computing Machinery (ACM), 2013. Conference paper (Refereed)
  • 30.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Touch-less interaction smartphone on go! (2013). In: Proceedings of SIGGRAPH Asia 2013, ACM, New York, NY, USA, 2013. Conference paper (Refereed)
    Abstract [en]

    A smartphone touch-less interaction based on mixed hardware and software is proposed in this work. The software application renders the circle menu graphics and status information using the smartphone's screen and audio. Augmented reality image rendering technology is employed for convenient finger-phone interaction. Users interact with the application using finger gesture motion behind the camera, which triggers the interaction events and generates activity sequences for interactive buffers. The combination of Contour-based Template Matching (CTM) and Tracking-Learning-Detection (TLD) provides the core support for hand-gesture interaction by accurately detecting and tracking the hand gesture.

  • 31.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Chen, Ge
    Ocean University China, Qingdao.
    WebVRGIS: a P2P network engine for VR data and GIS analysis (2013). In: Lecture Notes in Computer Science: Neural Information Processing / [ed] Minho Lee, Akira Hirose, Zeng-Guang Hou, Rhee Man Kil, Springer Berlin Heidelberg, 2013, pp. 503-510. Conference paper (Refereed)
    Abstract [en]

    A peer-to-peer (P2P) network engine for geographic VR data and GIS analysis on a 3D globe is proposed, which synthesizes several recent information technologies, including web virtual reality (VR), 3D geographic information systems (GIS), 3D visualization and P2P networking. The engine is used to organize and present massive spatial data, such as remote sensing data, and to share and publish them online over hash-based P2P. The P2P network maps users in real geographic space to their avatars in the virtual scene, as well as to the nodes in the virtual network. The engine also supports integrated VRGIS functions, including 3D spatial analysis and 3D visualization of spatial processes, and serves as a web engine for the 3D globe and the digital city.

  • 32.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Chen, Ge
    Ocean University of China, Qingdao, China.
    WebVRGIS: WebGIS based interactive online 3D virtual community (2013). In: 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), Institute of Electrical and Electronics Engineers (IEEE), 2013, pp. 94-99. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a WebVRGIS-based interactive online 3D virtual community, built on WebGIS technology and web VR technology. It is a multi-dimensional (MD) web geographic information system (WebGIS) based 3D interactive online virtual community: a virtual real-time 3D communication system and web systems development platform capable of running in a variety of browsers. In this work, four key issues are studied: (1) fusion of multi-source MD geographical data in the WebGIS, (2) scene composition with 3D avatars, (3) network dispatch of massive data, and (4) real-time interaction among multi-user avatars. Our system is divided into three modules: data preprocessing, background management and front-end user interaction. The core of the front-end interaction module is packaged in the MD map expression engine 3GWebMapper and the free plug-in network 3D rendering engine WebFlashVR. We evaluated the robustness of our system using three campuses of Ocean University of China (OUC) as a testing base. The results show the high efficiency, ease of use and robustness of our system.

  • 33.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khan, Muhammad Sikandar Lal
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden..
    Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix (2013). In: Proceedings of the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a simple anaglyph 3D stereo generation algorithm from a 2D video sequence captured with a monocular camera. In our novel approach, we employ camera pose estimation to generate stereoscopic 3D directly from 2D video without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix. We also demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot generate better results. Furthermore, we utilize the structure-from-motion (with fundamental matrix) reconstructed camera pose model to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most video sequences.
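Once a second view is available, composing the red/cyan anaglyph itself is simple channel mixing: the red channel comes from the left view, green and blue from the right. The sketch below assumes both views are given as RGB arrays; in the paper the second view is synthesised from the monocular sequence via fundamental-matrix-based pose estimation, which is not reproduced here.

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph from a stereo pair of H x W x 3 arrays.

    Red channel from the left view; green and blue from the right view.
    Both views are assumed inputs here (the paper synthesises the second
    view from 2D video using recovered camera poses).
    """
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]   # R from the left view
    return out                        # G, B stay from the right view
```

Viewed through red/cyan glasses, each eye then receives (approximately) only its own view, producing the depth illusion.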

  • 34.
    LV, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Feng, Shengzhong
    Chinese Academy of Science, China.
    Khan, Muhammad Sikandar Lal
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Ur Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Foot motion sensing: augmented game interface based on foot interaction for smartphone (2014). In: CHI EA '14: CHI '14 Extended Abstracts on Human Factors in Computing Systems, ACM, 2014, pp. 293-296. Conference paper (Refereed)
    Abstract [en]

    We designed and developed two games, a real-time augmented football game and an augmented foot piano game, to demonstrate an innovative interface based on a foot-motion-sensing approach for smartphones. In the proposed interface, a computer-vision-based hybrid detection and tracking method provides the core support for foot interaction by accurately tracking the shoes. Based on the proposed interface, two demonstrations were developed; the applications employ augmented reality technology to render the game graphics and game status information on the smartphone's screen. Players interact with the game using foot motions toward the rear camera, which trigger the interaction events. The interface supports basic foot motion sensing (i.e. direction of movement, velocity, and rhythm).

  • 35. Lv, Zhihan
    et al.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Computer Engineering and Science Department, Palestine Polytechnic University, Hebron, Palestine.
    Feng, Shengzhong
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Touch-less interactive augmented reality game on vision-based wearable device (2015). In: Personal and Ubiquitous Computing, ISSN 1617-4909, E-ISSN 1617-4917, Vol. 19, no. 3-4, pp. 551-567. Journal article (Refereed)
    Abstract [en]

    There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive and augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games through dynamic hand/foot gestures in front of the camera, which trigger interaction events that act on the virtual objects in the scene. As a proof of concept, three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology. Finally, a comparative evaluation demonstrates the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or with Google Glass, together with a workload assessment and measures of users' emotions and satisfaction.

  • 36. Meurisch, Christian
    et al.
    Günther, Sebastian
    Naeem, Usman
    Baumann, Paul
    Scholl, Philipp M.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Azam, Muhammad Awais
    Mühlhäuser, Max
    SmartGuidance'17: 2nd Workshop on Intelligent Personal Support of Human Behavior (2017). Conference paper (Refereed)
    Abstract [en]

    In today's fast-paced environment, humans are faced with various problems such as information overload, stress, health and social issues. So-called anticipatory systems promise to approach these issues through personal guidance or support within a user's daily and professional life. The Second Workshop on Intelligent Personal Support of Human Behavior (SmartGuidance'17) aims to build on the success of the previous workshop (namely Smarticipation) organized in conjunction with UbiComp 2016, to continue discussing the latest research outcomes of anticipatory mobile systems. We invite the submission of papers within this emerging, interdisciplinary research field of anticipatory mobile computing that focuses on understanding, design, and development of such ubiquitous systems. We also welcome contributions that investigate human behaviors, underlying recognition and prediction models; conduct field studies; as well as propose novel HCI techniques to provide personal support. All workshop contributions will be published in supplemental proceedings of the UbiComp 2017 conference and included in the ACM Digital Library.

  • 37.
    Ortiz Morales, Daniel
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    La Hera, Pedro
    Sveriges lantbruksuniversitet .
    Ur Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Generating Periodic Motions for the Butterfly Robot (2013). In: Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on / [ed] Amato, N., IEEE conference proceedings, 2013, pp. 2527-2532. Conference paper (Refereed)
    Abstract [en]

    We analyze the problem of dynamic non-prehensile manipulation by considering the example of the butterfly robot. Our main objective is to study the problem of stabilizing periodic motions, which resemble a form of juggling acrobatics. To this end, we approach the problem within the framework of virtual holonomic constraints. On this basis, we provide an analytical and systematic solution to the problems of trajectory planning and the design of feedback controllers that guarantee orbital exponential stability. Results are presented in the form of simulation tests.

  • 38.
    Pizzamiglio, Sara
    et al.
    School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Naeem, Usman
    School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Sharif, Muhammad Saeed
    School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Abdalla, Hassan
    School of Architecture, Computing and Engineering, University of East London, United Kingdom.
    Turner, Duncan L.
    Neurorehabilitation Unit, School of Health, Sport and Bioscience, University of East London, United Kingdom.
    A multimodal approach to measure the distraction levels of pedestrians using mobile sensing (2017). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 113, pp. 89-96. Journal article (Refereed)
    Abstract [en]

    The emergence of smart phones has had a positive impact on society, as their range of features and automation has allowed people to become more productive while on the move. On the other hand, the use of these devices has also become a distraction and hindrance, especially for pedestrians who use their phones while walking on the streets. This is reinforced by the fact that pedestrian injuries due to mobile phone use now exceed mobile-phone-related driver injuries. This paper describes an approach that measures the different levels of distraction encountered by pedestrians while they are walking. To distinguish between distractions within the brain, the proposed work analyses data collected from mobile sensors (accelerometers for movement, mobile EEG for electroencephalogram signals from the brain). The long-term motivation of this work is to provide pedestrians with notifications as they approach potential hazards while walking on the street and conducting multiple tasks, such as using a smart phone.

  • 39. Quan, Zhou
    et al.
    Rehman, Shafiq Ur
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Yu, Zhou
    Xin, Wei
    Lei, Wang
    Baoyu, Zheng
    Face Recognition Using Dense SIFT Feature Alignment (2016). In: Chinese journal of electronics, ISSN 1022-4653, E-ISSN 2075-5597, Vol. 25, no. 6, pp. 1034-1039. Journal article (Refereed)
    Abstract [en]

    This paper addresses the face recognition problem in a more challenging scenario, where the training and test samples are both subject to visual variations in pose, expression and alignment. We employ dense Scale-Invariant Feature Transform (SIFT) feature matching as a generic transformation to roughly align training samples, and then identify input facial images via an improved sparse representation model based on the aligned training samples. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our method compared with previous approaches.
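    SIFT-based alignment rests on nearest-neighbour descriptor matching. A minimal sketch of that matching step with Lowe's ratio test — a standard technique for rejecting ambiguous matches, not necessarily this paper's exact procedure — in plain Python:

    ```python
    def euclid(a, b):
        """Euclidean distance between two descriptor vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def ratio_match(desc_a, desc_b, ratio=0.8):
        """Match descriptors from image A to image B with the ratio test:
        keep a match only if the nearest neighbour in B is clearly closer
        than the second nearest, i.e. the match is unambiguous.
        Returns a list of (index_in_A, index_in_B) pairs."""
        matches = []
        for i, d in enumerate(desc_a):
            dists = sorted((euclid(d, e), j) for j, e in enumerate(desc_b))
            if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
                matches.append((i, dists[0][1]))
        return matches
    ```

    In dense SIFT, descriptors are computed on a regular grid rather than at detected keypoints, so the resulting matches give a dense correspondence field that can drive the rough alignment of training samples.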

  • 40.
    Shafiq, ur Réhman
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden..
    Using Vibrotactile Language for Multimodal Human Animals Communication and Interaction (2014). In: Proceedings of the 2014 Workshops on Advances in Computer Entertainment Conference, ACE '14, Association for Computing Machinery (ACM), 2014, pp. 1:1-1:5. Conference paper (Refereed)
    Abstract [en]

    In this work we aim to facilitate computer-mediated multimodal communication and interaction between humans and animals based on vibrotactile stimuli. To study and influence the behavior of animals, researchers usually use 2D/3D visual stimuli; we instead use a vibrotactile pattern-based language, which provides the opportunity to communicate and interact with animals. We have performed experiments with a vibrotactile human-animal multimodal communication system to study the effectiveness of vibratory stimuli applied to the animal's skin along with audio and visual stimuli. The preliminary results are encouraging and indicate that low-resolution tactual displays are effective in transmitting information.

  • 41.
    ur Rehman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    iFeeling: vibrotactile rendering of human emotions on mobile phones (2010). In: Mobile multimedia processing: fundamentals, methods, and applications, Springer, 2010, pp. 1-20. Conference paper (Refereed)
    Abstract [en]

    Today, mobile phone technology is mature enough to let us interact effectively with mobile phones using three of our major senses: vision, hearing and touch. Like the camera, which adds interest and utility to the mobile experience, the vibration motor in a mobile phone offers a new way to improve the interactivity and usability of mobile phones. In this chapter, we show that by carefully controlling vibration patterns, more than one bit of information can be rendered with a vibration motor. We demonstrate how to turn a mobile phone into a social interface for the blind, so that they can sense the emotional information of others. Technical details are given on how to extract emotional information, design vibrotactile coding schemes and render vibrotactile patterns, as well as how to carry out user tests to evaluate usability. Experimental studies and user tests have shown that users do receive and interpret more than one bit of emotional information. This shows the potential to enrich mobile phone communication among users through the touch channel.
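    A vibrotactile coding scheme of the kind described here can be sketched as a mapping from emotion labels to pulse sequences, where each pulse is an (on, off) pair in milliseconds. The labels and timings below are hypothetical, chosen only to illustrate how more than one bit fits into a single-motor channel; they are not the chapter's actual patterns:

    ```python
    # Hypothetical coding scheme: each emotion maps to a sequence of
    # (on_ms, off_ms) vibration pulses. Distinct rhythms keep the
    # patterns distinguishable by touch alone.
    PATTERNS = {
        "happy":   [(100, 50), (100, 50), (100, 0)],  # three quick bursts
        "sad":     [(400, 200), (400, 0)],            # two slow pulses
        "angry":   [(60, 30)] * 5,                    # rapid buzzing
        "neutral": [(200, 0)],                        # single pulse
    }

    def encode(emotion):
        """Return the pulse sequence to play for an emotion label."""
        return PATTERNS[emotion]

    def decode(pattern):
        """Recover the emotion label from a pulse sequence, if known."""
        for name, p in PATTERNS.items():
            if p == pattern:
                return name
        return None

    def total_duration(pattern):
        """Total playback time in milliseconds, on- plus off-phases."""
        return sum(on + off for on, off in pattern)
    ```

    On a real handset the pulse sequence would be handed to the platform's vibration API; the point of the sketch is that four distinguishable patterns already carry two bits per rendering.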

  • 42.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Expressing emotions through vibration for perception and control (2010). Doctoral thesis, compilation (Other academic)
    Abstract [en]

    This thesis addresses a challenging problem: how to let the visually impaired "see" others' emotions. We human beings are heavily dependent on facial expressions to express ourselves. A smile shows that the person you are talking to is pleased, amused, relieved, etc. People use emotional information from facial expressions to switch between conversation topics and to determine the attitudes of individuals. Missing the emotional information in facial expressions and head gestures makes it extremely difficult for the visually impaired to interact with others in social events. To enhance the social interactive ability of the visually impaired, this thesis works on the scientific topic of "expressing human emotions through vibrotactile patterns".

    It is quite challenging to deliver human emotions through touch, since our touch channel is very limited. We first investigated how to render emotions through a vibrator. We developed a real-time "lipless" tracking system to extract dynamic emotions from the mouth, and employed mobile phones as a platform for the visually impaired to perceive primary emotion types. Later on, we extended the system to render more general dynamic media signals: for example, rendering live football games through vibration on the mobile phone to improve mobile users' communication and entertainment experience. To display more natural emotions (i.e. emotion type plus emotion intensity), we developed technology to enable the visually impaired to directly interpret human emotions. This was achieved by the use of machine vision techniques and a vibrotactile display. The display comprises a "vibration actuator matrix" mounted on the back of a chair, whose actuators are sequentially activated to provide dynamic emotional information. The research focus has been on finding a global, analytical and semantic representation for facial expressions to replace the state-of-the-art facial action coding system (FACS) approach. We proposed using the manifold of facial expressions to characterize dynamic emotions. The basic emotional expressions with increasing intensity become curves on the manifold extending from the center, and the blends of emotions lie between those curves, so they can be defined analytically by the positions of the main curves. The manifold is the "Braille code" of emotions.

    The developed methodology and technology have been extended to build assistive wheelchair systems that aid a specific group of disabled people, cerebral palsy or stroke patients (i.e. those lacking fine motor control skills), who do not have the ability to access and control a wheelchair by conventional means such as a joystick or chin stick. The solution is to extract the manifold of head or tongue gestures for controlling the wheelchair. The manifold is rendered by a 2D vibration array to provide the wheelchair user with action information from gestures and with system status information, which is very important for enhancing the usability of such an assistive system. The current research not only provides a foundation stone for vibrotactile rendering systems based on object localization, but is also a concrete step toward a new dimension of human-machine interaction.

  • 43.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Lip Localization and Tracking: A Survey (2007). Report (Other academic)
    Abstract [en]

    We explore the classification and performance of various techniques for human-lip extraction and tracking in video sequences. This survey identifies the potential challenges and gives an overview of recent techniques. Building on previous efforts, it discusses their limitations and points out performance parameters for lip-region localization and tracking techniques. The scope of this survey is limited to lip feature extraction techniques and does not include human face detection or recognition. It concludes with some thoughts on new directions in the field.

  • 44.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Tactile Patterns and Perception: A Pre-study Report (2005). Report (Other academic)
  • 45.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Vibrotactile rendering of dynamic media signals (2008). Licentiate thesis, monograph (Other academic)
  • 46.
    ur Réhman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khan, Abdullah
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Li, Haibo
    School of Computer Science and Communication, Royal Institute of Technology (KTH), Sweden.
    Interactive Feet for Mobile Immersive Interaction (2012). In: MobileHCI 2012: Mobile Vision (MobiVis) – Vision-based Applications and HCI, San Francisco, USA: MOBIVIS, 2012. Conference paper (Other academic)
    Abstract [en]

    In this paper we propose a novel foot-gesture tracking algorithm for mobile phones. To evaluate the proposed algorithm, we developed two application scenarios for a mobile immersive interaction experience based on audio, vibrotactile and foot interactions. In the current studies we located and tracked foot gestures using a template matching algorithm. The strength of the proposed algorithm is demonstrated by the successful completion of the given tasks. In the first application scenario, the user is presented with immersive fun dialing, i.e. dialing desired phone numbers using foot gestures, while in the second, the user is provided with an immersive music game for unlocking the keypad using foot gestures on a smartphone. Our algorithm not only successfully locates and tracks foot gestures, but can also detect and track shoes of any size. These studies show the effectiveness of foot gestures on mobile phones in real-life situations.
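    The template matching at the core of such a tracker can be sketched as an exhaustive sum-of-squared-differences search over image positions; this is a generic illustration of the technique, not the paper's implementation, and real trackers restrict the search to a window around the last known position:

    ```python
    def ssd(patch, template):
        """Sum of squared differences between two equal-size 2D patches."""
        return sum(
            (patch[r][c] - template[r][c]) ** 2
            for r in range(len(template))
            for c in range(len(template[0]))
        )

    def match_template(image, template):
        """Slide the template over the image and return the (row, col)
        of the best (lowest-SSD) position — the core of template matching."""
        th, tw = len(template), len(template[0])
        best, best_pos = None, None
        for r in range(len(image) - th + 1):
            for c in range(len(image[0]) - tw + 1):
                patch = [row[c:c + tw] for row in image[r:r + th]]
                score = ssd(patch, template)
                if best is None or score < best:
                    best, best_pos = score, (r, c)
        return best_pos
    ```

    Tracking then amounts to repeating this search frame by frame, updating the template (or its scale) so shoes of different sizes stay matched.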

  • 47.
    ur Réhman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khan, Muhammad Sikandar Lal
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Nanjing University of Posts and Telecommunications, Nanjing, China.
    Li, Haibo
    Media technology and interaction design, Royal Institute of Technology (KTH), Sweden; Nanjing University of Posts and Telecommunications, Nanjing, China.
    Vibrotactile TV for immersive experience (2014). In: Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, 2014. Conference paper (Refereed)
    Abstract [en]

    Audio and video are two powerful media forms that shorten the distance between the audience and the actors or players in TV and film. Recent research shows that people today consume more and more multimedia content on mobile devices such as tablets and smartphones. Therefore, an important question emerges: how can we render high-quality, personal immersive experiences to consumers on these systems? To give the audience an immersive engagement that differs from "watching a play", we designed a study to render complete immersive media, including "emotional information", based on augmented vibrotactile coding on the back of the user along with the audio-video signal. The reported emotional responses to videos viewed with and without haptic enhancement show that participants exhibited an increased emotional response to media with haptic enhancement. Overall, these studies suggest that our multisensory approach is effective and increases immersion and user satisfaction.

  • 48.
    ur Réhman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    iFeeling: Vibrotactile rendering of human emotions on mobile phones (2010). In: Mobile multimedia processing: fundamentals, methods, and applications / [ed] Xiaoyi Jiang, Matthew Y. Ma, Chang Wen Chen, Heidelberg, Germany: Springer Berlin, 2010, 1st edition, pp. 1-20. Book chapter, part of anthology (Other academic)
    Abstract [en]

    Today, mobile phone technology is mature enough to let us interact effectively with mobile phones using three of our major senses: vision, hearing and touch. Like the camera, which adds interest and utility to the mobile experience, the vibration motor in a mobile phone offers a new way to improve the interactivity and usability of mobile phones. In this chapter, we show that by carefully controlling vibration patterns, more than one bit of information can be rendered with a vibration motor. We demonstrate how to turn a mobile phone into a social interface for the blind, so that they can sense the emotional information of others. Technical details are given on how to extract emotional information, design vibrotactile coding schemes and render vibrotactile patterns, as well as how to carry out user tests to evaluate usability. Experimental studies and user tests have shown that users do receive and interpret more than one bit of emotional information. This shows the potential to enrich mobile phone communication among users through the touch channel.

  • 49.
    ur Réhman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Real-time lip tracking for emotion understanding (2006). In: Swedish Symposium on Image Analysis, Umeå, March 16-17, 2006 / [ed] Georgsson, Fredrik, 1971-, Börlin, Niclas, 1968-, Umeå: Umeå universitet, Institutionen för datavetenskap, 2006, pp. 29-32. Conference paper (Other academic)
  • 50.
    ur Réhman, Shafiq
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Liu, Li
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Sensing expressive lips with a mobile phone (2008). In: Proceedings of the 1st international workshop on mobile multimedia processing, in conjunction with the 19th international conference on pattern recognition, Tampa, Florida, USA, 2008. Conference paper (Refereed)
    Abstract [en]

    Considering the potential benefits of vibrations in mobile phones, we propose an intuitive method to render human emotions for the visually impaired. A mobile phone is "synchronized" with emotional information extracted from human lip dynamics. By holding the mobile phone, the user is able to get on-line emotion information about others. Experimental results based on a usability evaluation of the system are encouraging. The user studies show perfect pattern recognition accuracy on the designed vibration patterns.
