ur Réhman, Shafiq
Publications (10 of 65)
Khan, M. S., Halawani, A., ur Réhman, S. & Li, H. (2018). Action Augmented Real Virtuality Design for Presence. IEEE Transactions on Cognitive and Developmental Systems, 10(4), 961-972
Action Augmented Real Virtuality Design for Presence
2018 (English). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no. 4, p. 961-972. Article in journal (Refereed). Published.
Abstract [en]

This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fail to convey the nonverbal behaviors that humans express in face-to-face communication, which results in a reduced experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it based on body or artifact actions that increase the feeling of presence; we call this concept presence through actions. Using this concept, we present the design of a novel action-augmented real virtuality prototype that considers the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is achieved through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to solve the challenges related to spatial and social presence. We have performed a user study in which invited participants were asked to experience our novel setup and to compare it with a traditional video teleconferencing setup. The results show that the action capabilities not only increase the feeling of spatial presence but also increase the feeling of social presence of a remote person among local collaborators.

Keywords
Real virtuality, Virtual reality, Embodiment, Telepresence, Actions, Perception, Embodied telepresence system, WebRTC, Face occlusion, Face retrieval
National Category
Signal Processing; Computer Systems
Research subject
Computing Science; Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-138278 (URN), 10.1109/TCDS.2018.2828865 (DOI), 000452636400012
Available from: 2017-08-16. Created: 2017-08-16. Last updated: 2019-01-07. Bibliographically approved.
Augustian, M., ur Réhman, S., Sandvig, A., Kotikawatte, T., Yongcui, M. & Evensmoen, H. R. (2018). EEG Analysis from Motor Imagery to Control a Forestry Crane. In: Karwowski, Waldemar, Ahram, Tareq (Eds.), Intelligent Human Systems Integration (IHSI 2018). Paper presented at 1st International Conference on Intelligent Human Systems Integration: Integrating People and Intelligent Systems (IHSI 2018), January 7-9, 2018, Dubai, United Arab Emirates (pp. 281-286), 722
EEG Analysis from Motor Imagery to Control a Forestry Crane
2018 (English). In: Intelligent Human Systems Integration (IHSI 2018) / [ed] Karwowski, Waldemar, Ahram, Tareq, 2018, Vol. 722, p. 281-286. Conference paper, Published paper (Refereed).
Abstract [en]

Brain-computer interface (BCI) systems can provide people with the ability to communicate with and control real-world systems using neural activity. It therefore makes sense to develop an assistive framework for command and control of a future robotic system that supports human-robot collaboration. In this paper, we employ electroencephalographic (EEG) signals recorded by electrodes placed over the scalp. Motor imagery of human hand movements is used to collect brain signals over the motor cortex area. The collected µ-wave (8–13 Hz) EEG signals were analyzed with event-related desynchronization/synchronization (ERD/ERS) quantification to extract a threshold between hand-grip and release movements, and this information can be used to control the grasping and release functionality of a forestry crane. The experiment was performed with four healthy persons to demonstrate the proof-of-concept BCI system. The study demonstrates that the proposed method has the potential to assist crane operators performing advanced manual tasks under heavy cognitive workload.
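
The ERD/ERS quantification described above reduces to a few lines of signal processing: band-pass the trial to the µ band, take squared samples as power, and compare a task window against a baseline window. The sketch below is a hedged illustration only; the 250 Hz sampling rate, window boundaries, 4th-order Butterworth filter, the -20% decision threshold, and all function names are assumptions rather than the authors' implementation.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate (Hz)

def mu_band_power(eeg, fs=FS, low=8.0, high=13.0):
    """Band-pass a single-channel trial to the mu band and return instantaneous power."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    mu = filtfilt(b, a, eeg)
    return mu ** 2  # squared samples approximate band power

def erd_ers_percent(trial, fs=FS, baseline_s=(0.0, 2.0), task_s=(3.0, 5.0)):
    """ERD/ERS% = (task power - baseline power) / baseline power * 100.
    Negative values (desynchronization) are expected during imagined hand grip."""
    power = mu_band_power(trial, fs)
    b0, b1 = (int(t * fs) for t in baseline_s)
    t0, t1 = (int(t * fs) for t in task_s)
    ref = power[b0:b1].mean()
    return 100.0 * (power[t0:t1].mean() - ref) / ref

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trial = rng.standard_normal(int(5 * FS))  # placeholder EEG trial
    score = erd_ers_percent(trial)
    # Hypothetical decision rule: strong desynchronization maps to a "grip" command.
    command = "grip" if score < -20.0 else "release"
    print(f"ERD/ERS%: {score:.1f} -> crane command: {command}")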

Series
Advances in Intelligent Systems and Computing (AISC), ISSN 2194-5357 ; 722
Keywords
Brain-computer interface (BCI), Mu-wave, Motor imagery, Event-related desynchronization (ERD), Event-related synchronization (ERS), Forestry crane, Assistive technologies, HCI
National Category
Interaction Technologies; Communication Systems; Signal Processing; Robotics; Human Computer Interaction; Neurology
Research subject
Computer and Information Science; Computer Systems; Clinical Neurophysiology; Electronics
Identifiers
urn:nbn:se:umu:diva-143918 (URN), 10.1007/978-3-319-73888-8_44 (DOI), 2-s2.0-85040229502 (Scopus ID), 978-3-319-73887-1 (ISBN), 978-3-319-73888-8 (ISBN)
Conference
1st International Conference on Intelligent Human Systems Integration: Integrating People and Intelligent Systems, (IHSI 2018), January 7-9, 2018, Dubai, United Arab Emirates
Available from: 2018-01-15. Created: 2018-01-15. Last updated: 2018-06-09. Bibliographically approved.
Harisubramanyabalaji, S. P., ur Réhman, S., Nyberg, M. & Gustavsson, J. (2018). Improving Image Classification Robustness Using Predictive Data Augmentation. In: Gallina B., Skavhaug A., Schoitsch E., Bitsch F. (Eds.), Computer Safety, Reliability, and Security: SAFECOMP 2018. Paper presented at 37th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Västerås, Sweden, 18-21 September, 2018 (pp. 548-561). Springer
Improving Image Classification Robustness Using Predictive Data Augmentation
2018 (English). In: Computer Safety, Reliability, and Security: SAFECOMP 2018 / [ed] Gallina B., Skavhaug A., Schoitsch E., Bitsch F., Springer, 2018, p. 548-561. Conference paper, Published paper (Refereed).
Abstract [en]

Safe autonomous navigation becomes challenging if the sensing system fails. A classification algorithm that is robust to the camera position, viewing angle, and environmental conditions of an autonomous vehicle, across vehicles of different sizes and types (car, bus, truck, etc.), can safely regulate vehicle control. As training data play a crucial role in robust classification of traffic signs, an effective augmentation technique that enriches the model's capacity to withstand variations in the urban environment is required. In this paper, a framework for identifying model weaknesses together with a targeted augmentation methodology is presented. Based on off-line behavior identification, the exact limitations of a Convolutional Neural Network (CNN) model are estimated so that only those challenge levels necessary for improved classifier robustness are augmented. Predictive Augmentation (PA) and Predictive Multiple Augmentation (PMA) methods are proposed to adapt the model based on the acquired challenges with a high numerical value of confidence. We validated our framework on two different training datasets and with five generated test groups containing varying levels of challenge (simple to extreme). The results show an improvement of 5-20% in overall classification accuracy while maintaining high confidence.
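
To make the predictive-augmentation idea concrete, the sketch below probes a trained classifier per challenge group and enlarges the training set only for the groups it is weak on. The evaluate/transform callables, the 0.9 accuracy cut-off, and the data shapes are illustrative assumptions, not the PA/PMA procedure from the paper.

from typing import Callable, List
import numpy as np

def weak_challenge_levels(evaluate: Callable[[str], float],
                          challenges: List[str],
                          min_accuracy: float = 0.9) -> List[str]:
    """Off-line behaviour identification: keep the challenges the model fails on."""
    return [c for c in challenges if evaluate(c) < min_accuracy]

def predictive_augment(images: np.ndarray,
                       weak: List[str],
                       transform: Callable[[np.ndarray, str], np.ndarray]) -> np.ndarray:
    """Augment the training set only with the identified weak challenges."""
    extra = [transform(images, c) for c in weak]
    return np.concatenate([images] + extra, axis=0) if extra else images

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    images = rng.random((8, 32, 32, 3))  # placeholder traffic-sign crops

    # Placeholder evaluation: pretend the model struggles with motion blur.
    scores = {"gaussian_noise": 0.95, "motion_blur": 0.62, "low_light": 0.91}
    weak = weak_challenge_levels(lambda c: scores[c], list(scores))

    # Placeholder transform: perturb images with a per-challenge severity.
    def transform(batch, challenge):
        severity = {"motion_blur": 0.2}.get(challenge, 0.1)
        return np.clip(batch + rng.normal(0.0, severity, batch.shape), 0.0, 1.0)

    augmented = predictive_augment(images, weak, transform)
    print(f"weak challenges: {weak}; training set grows {len(images)} -> {len(augmented)}")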

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11094
Keywords
Safety-risk assessment, Predictive augmentation, Convolutional neural network, Traffic sign classification, Real-time challenges
National Category
Computer Vision and Robotics (Autonomous Systems); Transport Systems and Logistics
Identifiers
urn:nbn:se:umu:diva-157245 (URN), 10.1007/978-3-319-99229-7_49 (DOI), 000458807000049, 978-3-319-99228-0 (ISBN), 978-3-319-99229-7 (ISBN)
Conference
37th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Västerås, Sweden, 18-21 September, 2018
Available from: 2019-03-18. Created: 2019-03-18. Last updated: 2019-03-18. Bibliographically approved.
Pizzamiglio, S., Naeem, U., Ur Réhman, S., Sharif, M. S., Abdalla, H. & Turner, D. L. (2017). A multimodal approach to measure the distraction levels of pedestrians using mobile sensing. Paper presented at The 8th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2017), September 18-20, 2017, Lund, Sweden. Procedia Computer Science, 113, 89-96
A multimodal approach to measure the distraction levels of pedestrians using mobile sensing
2017 (English). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 113, p. 89-96. Article in journal (Refereed). Published.
Abstract [en]

The emergence of smartphones has had a positive impact on society, as their range of features and automation has allowed people to become more productive while on the move. On the other hand, the use of these devices has also become a distraction and a hindrance, especially for pedestrians who use their phones while walking on the street. This is reinforced by the fact that pedestrian injuries due to the use of mobile phones now exceed mobile-phone-related driver injuries. This paper describes an approach that measures the different levels of distraction encountered by pedestrians while they are walking. To distinguish between distractions within the brain, the proposed work analyses data collected from mobile sensors (accelerometers for movement and mobile EEG for electroencephalogram signals from the brain). The long-term motivation of the proposed work is to provide pedestrians with notifications as they approach potential hazards while walking on the street and performing multiple tasks, such as using a smartphone.
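
As an illustration of the multimodal idea, the sketch below combines simple accelerometer statistics with EEG band power for one time window; a classifier over such fused vectors could then separate distraction levels. The window length, sampling rate, chosen frequency bands, and function names are assumptions and do not reproduce the paper's pipeline.

import numpy as np

def accel_features(acc_xyz: np.ndarray) -> np.ndarray:
    """Gait-related statistics from a (T, 3) accelerometer window."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    return np.array([magnitude.mean(), magnitude.std(), np.ptp(magnitude)])

def eeg_band_power(eeg: np.ndarray, fs: float, band: tuple) -> float:
    """Average FFT power of a single EEG channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].mean())

def fused_features(acc_xyz: np.ndarray, eeg: np.ndarray, fs: float = 128.0) -> np.ndarray:
    """Concatenate movement and brain features for one time window."""
    theta = eeg_band_power(eeg, fs, (4.0, 8.0))    # often associated with workload
    alpha = eeg_band_power(eeg, fs, (8.0, 13.0))
    return np.concatenate([accel_features(acc_xyz), [theta, alpha]])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    vector = fused_features(rng.standard_normal((256, 3)), rng.standard_normal(256))
    print("fused feature vector:", np.round(vector, 3))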

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
multimodal signal processing, distraction, HCI, electroencephalogram (EEG) signals, pedestrian safety, safety awareness, mobile sensing, walking behavior, working memory
National Category
Signal Processing; Computer Systems; Medical Laboratory and Measurements Technologies; Interaction Technologies
Research subject
Signal Processing; Computing Science
Identifiers
urn:nbn:se:umu:diva-139945 (URN), 10.1016/j.procs.2017.08.297 (DOI), 000419236500011
Conference
The 8th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2017), September 18-20, 2017, Lund, Sweden
Available from: 2017-09-27. Created: 2017-09-27. Last updated: 2018-06-09. Bibliographically approved.
Ehatisham-ul-Haq, M., Awais Azam, M., Naeem, U., Ur Rèhman, S. & Khaild, A. (2017). Identifying smartphone users based on their activity patterns via mobile sensing. Paper presented at The 8th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2017), September 18-20, 2017, Lund, Sweden. Procedia Computer Science, 113, 202-209
Identifying smartphone users based on their activity patterns via mobile sensing
2017 (English). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 113, p. 202-209. Article in journal (Refereed). Published.
Abstract [en]

Smartphones are ubiquitous devices that enable users to perform many of their routine tasks anytime and anywhere. With the advancement of information technology, smartphones are now equipped with sensing and networking capabilities that provide context-awareness for a wide range of applications. Due to their ease of use and access, many users store private data on their smartphones, such as personal identifiers and bank account details. This type of sensitive data can be vulnerable if the device is lost or stolen. The existing methods for securing mobile devices, including passwords, PINs, and pattern locks, are susceptible to many attacks, such as smudge attacks. This paper proposes a novel framework to protect sensitive data on smartphones by identifying smartphone users based on their behavioral traits using smartphone-embedded sensors. A series of experiments has been conducted to validate the proposed framework and demonstrates its effectiveness.
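
A minimal sketch of the underlying idea, assuming that windowed accelerometer statistics capture the owner's movement behaviour: enrol a statistical profile of the owner and flag windows that deviate from it. The features, the z-score rule, and the threshold are illustrative choices, not the framework proposed in the paper.

import numpy as np

def window_features(acc: np.ndarray) -> np.ndarray:
    """Simple statistics of a (T, 3) accelerometer window."""
    mag = np.linalg.norm(acc, axis=1)
    return np.array([mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()])

def enrol(windows):
    """Owner profile: mean and spread of the features over enrolment windows."""
    feats = np.stack([window_features(w) for w in windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def is_owner(window: np.ndarray, profile, z_threshold: float = 3.0) -> bool:
    """Accept the window if its features stay within z_threshold of the profile."""
    mean, std = profile
    z = np.abs((window_features(window) - mean) / std)
    return bool(np.all(z < z_threshold))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    owner_walks = [rng.normal(0.0, 1.0, (128, 3)) for _ in range(20)]
    profile = enrol(owner_walks)
    genuine = rng.normal(0.0, 1.0, (128, 3))
    imposter = rng.normal(0.0, 2.5, (128, 3))  # noticeably different motion pattern
    print("genuine accepted:", is_owner(genuine, profile))
    print("imposter accepted:", is_owner(imposter, profile))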

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
activity recognition, behavioral biometrics, continuous sensing, mobile device security, data privacy, mobile sensing, ubiquitous computing, user identification
National Category
Computer Systems; Signal Processing; Interaction Technologies
Research subject
Computer and Information Science
Identifiers
urn:nbn:se:umu:diva-139946 (URN), 10.1016/j.procs.2017.08.349 (DOI), 000419236500025
Conference
The 8th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN 2017), September 18-20, 2017, Lund, Sweden.
Available from: 2017-09-27. Created: 2017-09-27. Last updated: 2018-06-09. Bibliographically approved.
Khan, M. S., ur Réhman, S., Mi, Y., Naeem, U., Beskow, J. & Li, H. (2017). Moveable facial features in a Social Mediator. In: Beskow J., Peters C., Castellano G., O'Sullivan C., Leite I., Kopp S. (Eds.), Intelligent Virtual Agents: IVA 2017. Paper presented at 17th International Conference on Intelligent Virtual Agents (IVA 2017), Stockholm, August 27-30, 2017 (pp. 205-208). Springer London
Moveable facial features in a Social Mediator
2017 (English). In: Intelligent Virtual Agents: IVA 2017 / [ed] Beskow J., Peters C., Castellano G., O'Sullivan C., Leite I., Kopp S., Springer London, 2017, p. 205-208. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

A brief display of facial-feature-based behavior has a major impact on personality perception in human-human communications. Creating such personality traits and representations in a social robot is a challenging task. In this paper, we propose an approach for a robotic face presentation based on moveable 2D facial features and present a comparative study when a synthesized face is projected using three setups: 1) a 3D mask, 2) a 2D screen, and 3) our 2D moveable facial feature based visualization. We found that the robot's personality and character are highly influenced by the projected face quality as well as the motion of the facial features.

Place, publisher, year, edition, pages
Springer London, 2017
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 10498
Keywords
Social robots, telepresence system, facial features, feature tracking, face robot
National Category
Engineering and Technology; Computer Systems; Signal Processing
Research subject
Computer Science; design
Identifiers
urn:nbn:se:umu:diva-138276 (URN), 10.1007/978-3-319-67401-8_23 (DOI), 000455400000023, 978-3-319-67400-1 (ISBN), 978-3-319-67401-8 (ISBN)
Conference
17th International Conference on Intelligent Virtual Agents (IVA 2017), Stockholm, August 27-30, 2017.
Available from: 2017-08-16. Created: 2017-08-16. Last updated: 2019-09-05. Bibliographically approved.
Meurisch, C., Günther, S., Naeem, U., Baumann, P., Scholl, P. M., Ur Réhman, S., . . . Mühlhäuser, M. (2017). SmartGuidance'17: 2nd Workshop on Intelligent Personal Support of Human Behavior. Paper presented at ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP) / ACM International Symposium on Wearable Computers (ISWC), Maui, Hawaii, SEP 11-15, 2017 (pp. 623-626). Association for Computing Machinery (ACM)
SmartGuidance'17: 2nd Workshop on Intelligent Personal Support of Human Behavior
2017 (English). Conference paper, Published paper (Refereed).
Abstract [en]

In today's fast-paced environment, humans are faced with various problems such as information overload, stress, health and social issues. So-called anticipatory systems promise to approach these issues through personal guidance or support within a user's daily and professional life. The Second Workshop on Intelligent Personal Support of Human Behavior (SmartGuidance'17) aims to build on the success of the previous workshop (namely Smarticipation) organized in conjunction with UbiComp 2016, to continue discussing the latest research outcomes of anticipatory mobile systems. We invite the submission of papers within this emerging, interdisciplinary research field of anticipatory mobile computing that focuses on understanding, design, and development of such ubiquitous systems. We also welcome contributions that investigate human behaviors, underlying recognition and prediction models; conduct field studies; as well as propose novel HCI techniques to provide personal support. All workshop contributions will be published in supplemental proceedings of the UbiComp 2017 conference and included in the ACM Digital Library.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2017
Keywords
anticipatory mobile computing, personal assistance, mobile sensing, pervasive environment, ubiquitous devices
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-146247 (URN), 10.1145/3123024.3124457 (DOI), 000426932500127
Conference
ACM International Joint Conference on Pervasive and Ubiquitous Computing (UBICOMP) / ACM International Symposium on Wearable Computers (ISWC), Maui, Hawaii, SEP 11-15, 2017
Available from: 2018-05-17. Created: 2018-05-17. Last updated: 2018-06-09. Bibliographically approved.
Quan, Z., Rehman, S. U., Yu, Z., Xin, W., Lei, W. & Baoyu, Z. (2016). Face Recognition Using Dense SIFT Feature Alignment. Chinese journal of electronics, 25(6), 1034-1039
Face Recognition Using Dense SIFT Feature Alignment
2016 (English). In: Chinese journal of electronics, ISSN 1022-4653, E-ISSN 2075-5597, Vol. 25, no. 6, p. 1034-1039. Article in journal (Refereed). Published.
Abstract [en]

This paper addresses the face recognition problem in a more challenging scenario where both the training and test samples are subject to visual variations in pose, expression, and misalignment. We employ dense Scale-Invariant Feature Transform (SIFT) feature matching as a generic transformation to roughly align the training samples, and then identify input facial images via an improved sparse representation model based on the aligned training samples. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our method for face recognition compared with previous methods.
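
The second stage described above, classification via sparse representation over (roughly aligned) training faces, can be sketched as follows. The dense-SIFT alignment is stubbed out by a plain normalization, and the l1 solver, regularization weight, and data shapes are assumptions rather than the authors' settings.

import numpy as np
from sklearn.linear_model import Lasso

def roughly_align(face: np.ndarray) -> np.ndarray:
    """Placeholder for dense-SIFT-based alignment; here it only flattens and normalizes."""
    v = face.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

def src_identify(test_face, train_faces, train_labels, alpha: float = 0.01):
    """Sparse-code the test face over all training faces, then pick the class whose
    atoms give the smallest reconstruction residual."""
    D = np.stack([roughly_align(f) for f in train_faces], axis=1)  # dictionary (d, n)
    y = roughly_align(test_face)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, y).coef_
    labels = np.asarray(train_labels)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    train = [rng.random((16, 16)) for _ in range(10)]
    labels = [i // 5 for i in range(10)]            # two subjects, five images each
    probe = train[7] + 0.05 * rng.random((16, 16))  # noisy view of subject 1
    print("predicted subject:", src_identify(probe, train, labels))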

Keywords
Face recognition, Dense SIFT feature alignment, Sparse representation
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:umu:diva-129909 (URN), 10.1049/cje.2016.10.001 (DOI), 000387735900007
Available from: 2017-01-13. Created: 2017-01-10. Last updated: 2018-06-09. Bibliographically approved.
Khan, M. S., ur Réhman, S., Söderström, U., Halawani, A. & Li, H. (2016). Face-off: a Face Reconstruction Technique for Virtual Reality (VR) Scenarios. In: Hua G., Jégou H. (Eds.), Computer Vision: ECCV 2016 Workshops. Paper presented at 14th European Conference on Computer Vision, ECCV 2016, Amsterdam, The Netherlands, 8-16 October, 2016 (pp. 490-503). Springer, 9913
Face-off: a Face Reconstruction Technique for Virtual Reality (VR) Scenarios
2016 (English). In: Computer Vision: ECCV 2016 Workshops / [ed] Hua G., Jégou H., Springer, 2016, Vol. 9913, p. 490-503. Conference paper, Published paper (Refereed).
Abstract [en]

Virtual Reality (VR) headsets occlude a significant portion of the human face. The real human face is, however, required in many VR applications, for example video teleconferencing. This paper proposes a wearable camera setup-based solution to reconstruct the real face of a person wearing a VR headset. At the core of our solution is asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lip, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: i) a calibration phase and ii) a reconstruction phase. In the former, a small calibration step is performed to align the test information with the training data, while the latter uses half-face information to reconstruct the full face using the aPCA-trained data. The proposed approach is validated with qualitative and quantitative analysis.
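
To illustrate the reconstruction step, the sketch below learns a full-face basis with standard PCA (the paper uses asymmetrical PCA), estimates the component weights from the visible lower-face pixels by least squares, and synthesizes the occluded upper half. Image sizes, the number of components, and the visibility mask are assumptions made for the example.

import numpy as np

def fit_pca(faces: np.ndarray, n_components: int):
    """faces: (n_samples, n_pixels). Returns the mean face and the top components."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]  # components: (k, n_pixels)

def reconstruct_from_partial(partial, visible_mask, mean, components):
    """Least-squares fit of the PCA weights using only the visible pixels."""
    A = components[:, visible_mask].T           # (n_visible, k)
    b = partial - mean[visible_mask]
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + weights @ components          # full-face estimate

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    h = w = 16
    faces = rng.random((50, h * w))              # placeholder "training faces"
    mean, comps = fit_pca(faces, n_components=10)

    full = faces[0]
    visible = np.zeros(h * w, dtype=bool)
    visible.reshape(h, w)[h // 2:, :] = True     # lower half visible; the HMD occludes the upper half
    estimate = reconstruct_from_partial(full[visible], visible, mean, comps)
    rms = np.sqrt(np.mean((estimate[~visible] - full[~visible]) ** 2))
    print(f"RMS error on the occluded half: {rms:.3f}")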

Place, publisher, year, edition, pages
Springer, 2016
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Virtual Reality, VR headset, Face reconstruction, PCA, wearable setup, Oculus
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Computer Systems; Signal Processing
Research subject
Computer Science; Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-138277 (URN), 10.1007/978-3-319-46604-0_35 (DOI), 2-s2.0-84989829345 (Scopus ID), 978-3-319-46603-3 (ISBN), 978-3-319-46604-0 (ISBN)
Conference
14th European Conference on Computer Vision, ECCV 2016, Amsterdam, The Netherlands, 8-16 October, 2016
Available from: 2017-08-16. Created: 2017-08-16. Last updated: 2018-10-01. Bibliographically approved.
Khan, M. S., Li, H. & ur Réhman, S. (2016). Gaze perception and awareness in smart devices. International journal of human-computer studies, 92-93, 55-65
Gaze perception and awareness in smart devices
2016 (English). In: International journal of human-computer studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, p. 55-65. Article in journal (Refereed). Published.
Abstract [en]

Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of a 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, the video teleconferencing devices commonly used today are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we have revisited the question of how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the '3D embodiment' of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup that is used for studying gaze awareness/perception in 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the 'Mona-Lisa Gaze Effect', where the gaze appears directed at the observer independent of his or her position in the room, and (ii) 'Gaze Awareness/Faithfulness', the ability to perceive an accurate spatial relationship between the observing person and the object observed by an actor. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.

Place, publisher, year, edition, pages
Elsevier, 2016
Keywords
Mona-Lisa gaze effect, gaze awareness, computer-mediated communication, eye contact, head gesture, gaze faithfulness, embodied telepresence system, tablet PC, HCI
National Category
Interaction Technologies; Robotics; Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-108568 (URN), 10.1016/j.ijhcs.2016.05.002 (DOI), 000379367900005
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2018-06-07. Bibliographically approved.