umu.se Publications
1 - 14 of 14
  • 1.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    School of Computer Science & Communication, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Template-based Search: A Tool for Scene Analysis (2016). In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceedings, IEEE, 2016, article id 7515772. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a simple and yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. The template is matched to contours in a given edge image to locate the designated entity. These templates are allowed to deform in order to deal with variations in the structure's shape and size. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in cluttered and noisy scenes in the applications presented.
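    As a rough illustration of the matching idea described above (not the authors' implementation), the following Python sketch scores a binary contour template against a Canny edge map using a distance transform, so that low scores mean the template lies close to real image contours. The image path and template points are placeholders, and the deformable, segment-wise dynamic programming search is omitted here; see the sketch under entry 2 for that step.

        import cv2
        import numpy as np

        # Hypothetical inputs: a grayscale scene image and an (N, 2) integer
        # array `tpl` of (x, y) points sampled from the binary shape template.
        gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
        edges = cv2.Canny(gray, 50, 150)
        # Distance transform of the inverted edge map: each pixel holds the
        # distance to its nearest edge pixel.
        dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

        def chamfer_score(tpl, dx, dy):
            """Mean distance from the shifted template points to the nearest
            image edges; low scores mean the template hugs real contours."""
            h, w = dist.shape
            x = np.clip(tpl[:, 0] + dx, 0, w - 1)
            y = np.clip(tpl[:, 1] + dy, 0, h - 1)
            return float(dist[y, x].mean())

        # Exhaustive scan over placements, keeping the best-scoring one:
        # best = min((chamfer_score(tpl, dx, dy), (dx, dy))
        #            for dy in range(0, gray.shape[0], 4)
        #            for dx in range(0, gray.shape[1], 4))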

  • 2.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Computer Engineering Department, Palestine Polytechnic University, Hebron, Palestine.
    Li, Haibo
    KTH.
    100 lines of code for shape-based object localization (2016). In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 60, pp. 458-472. Article in journal (Refereed)
    Abstract [en]

    We introduce a simple and effective concept for localizing objects in densely cluttered edge images based on shape information. The shape information is characterized by a binary template of the object's contour, provided to search for object instances in the image. We adopt a segment-based search strategy, in which the template is divided into a set of segments. In this work, we propose our own segment representation that we call the one-pixel segment (OPS), in which each pixel in the template is treated as a separate segment. This is done to achieve the high flexibility required to account for intra-class variations. The OPS representation can also handle scale changes effectively. A dynamic programming algorithm uses the OPS representation to realize the search process, enabling a detailed localization of the object boundaries in the image. The concept's simplicity is reflected in the ease of implementation, as the paper's title suggests. The algorithm works directly with very noisy edge images extracted using the Canny edge detector, without the need for any preprocessing or learning steps. We present our experiments and show that our results outperform those of very powerful, state-of-the-art algorithms.
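    The abstract's core idea, in which every template pixel is its own segment whose small displacement is chosen by dynamic programming, can be sketched as follows. This is a simplified stand-in rather than the paper's published cost model: the unary cost is the distance-transform value at the shifted pixel (the dist array from the sketch under entry 1), and the pairwise term penalizes displacement changes between consecutive contour pixels.

        import numpy as np

        def ops_match(dist, pts, shifts, smooth=0.5):
            """pts: (N, 2) ordered template contour pixels; shifts: (K, 2)
            candidate integer displacements shared by all one-pixel segments;
            dist: distance transform of the edge image. Returns the total
            cost and one chosen displacement per template pixel."""
            h, w = dist.shape
            N, K = len(pts), len(shifts)
            # Deformation penalty between displacements of adjacent segments.
            pair = smooth * np.abs(shifts[:, None] - shifts[None]).sum(-1)

            def unary(i):
                x = np.clip(pts[i, 0] + shifts[:, 0], 0, w - 1)
                y = np.clip(pts[i, 1] + shifts[:, 1], 0, h - 1)
                return dist[y, x]

            cost, back = unary(0), np.zeros((N, K), dtype=int)
            for i in range(1, N):
                total = cost[:, None] + pair  # rows: previous shift, cols: current
                back[i] = total.argmin(axis=0)
                cost = unary(i) + total.min(axis=0)
            k = int(cost.argmin())
            path = [k]
            for i in range(N - 1, 0, -1):
                k = int(back[i, k])
                path.append(k)
            return float(cost.min()), shifts[path[::-1]]

        # e.g. shifts = np.array([(dx, dy) for dx in range(-3, 4)
        #                         for dy in range(-3, 4)])

    With a (2r+1)^2 shift set the search costs O(N*K^2) operations, which stays cheap for contour-sized N.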

  • 3.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    School of Computer Science & Communication, Royal Institute of Technology (KTH), Sweden.
    FingerInk: Turn your Glass into a Digital Board (2013). In: Proceedings of the 25th OzCHI Conference: Augmentation, Application, Innovation, Collaboration, ACM Digital Library, 2013, pp. 393-396. Conference paper (Refereed)
    Abstract [en]

    We present a robust vision-based technology for hand and finger detection and tracking that can be used in many CHI scenarios. The method can be used in real-life setups and does not assume any predefined conditions. Moreover, it does not require any additional expensive hardware. It fits well into the user's environment without major changes and hence can be used in the ambient intelligence paradigm. Another contribution is interaction through glass, which is a natural yet challenging medium to interact with. We introduce the concept of an "invisible information layer" embedded into normal window glass that is thereafter used as an interaction medium.

  • 4.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Human Ear Localization: A Template-based Approach (2015). Conference paper (Other academic)
    Abstract [en]

    We propose a simple and yet effective technique for shape-based ear localization. The idea is based on using a predefined binary ear template that is matched to ear contours in a given edge image. To cope with changes in ear shapes and sizes, the template is allowed to deform. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust localization results in various cluttered and noisy setups.

  • 5.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Personal Relative Time: Towards Internet of Watches (2011). In: 2011 IEEE International Conferences on Internet of Things and Cyber, Physical and Social Computing, Los Alamitos: IEEE Computer Society, 2011, pp. 678-682. Conference paper (Refereed)
    Abstract [en]

    We introduce an idea for connecting timekeeping devices through the Internet, aiming at assigning people their individual personal time to loosen the strict rule of time synchronization that, in many cases, causes problems in accessing available resources. Information about these resources, the users, and their plans is utilized to accomplish the task. Time scheduling to assign users their individual time, and the readjustment of their timekeeping devices, is done implicitly so that they do not feel any abnormal changes during their day. This leads to a nonlinear relationship between real (absolute) time and personal time. We explain the concept, give examples, and suggest a framework for the system.
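    The framework itself is conceptual, but as a toy illustration of the implicit readjustment it calls for, the sketch below blends a scheduled offset into a device's clock gradually, yielding a monotonic, nonlinear mapping from absolute to personal time with no perceptible jump. All names and values are illustrative and not from the paper.

        from datetime import datetime, timedelta

        def personal_time(now, t0, offset_minutes, ramp_hours=8.0):
            """Blend a scheduled offset in linearly over `ramp_hours`
            starting at t0; the resulting clock stays monotonic, so the
            wearer never observes an abrupt change."""
            elapsed = (now - t0).total_seconds() / 3600.0
            frac = min(max(elapsed / ramp_hours, 0.0), 1.0)  # 0 -> 1 over the ramp
            return now + timedelta(minutes=offset_minutes * frac)

        # Example: shift one user's day by -20 minutes across a workday.
        t0 = datetime(2011, 6, 1, 8, 0)
        print(personal_time(datetime(2011, 6, 1, 12, 0), t0, -20))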

  • 6.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Anani, Adi
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Building eye contact in e-learning through head-eye coordination (2011). In: International Journal of Social Robotics, ISSN 1875-4791, Vol. 3, no. 1, pp. 95-106. Article in journal (Refereed)
    Abstract [en]

    Video conferencing is a very effective tool to use for e-learning. Most of the available video conferencing systems suffer from a major drawback: the lack of eye contact between participants. In this paper we present a new scheme for building eye contact in e-learning sessions. The scheme assumes a video conferencing session with a "one teacher, many students" arrangement. In our system, eye contact is achieved without the need for any gaze estimation technique. Instead, we "generate the gaze" by allowing the user to communicate his visual attention to the system through head-eye coordination. To enable real-time and precise head-eye coordination, a head motion tracking technique is required. Unlike traditional head tracking systems, our procedure suggests mounting the camera on the user's head rather than in front of it. This configuration achieves much better resolution and thus leads to better tracking results. Promising results obtained from both demo and real-time experiments demonstrate the effectiveness and efficiency of the proposed scheme. Although this paper concentrates on e-learning, the proposed concept can easily be extended to the world of interaction with social robotics, in which introducing eye contact between humans and robots would be of great advantage.
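    For flavour only, since the paper publishes no code, here is a minimal sketch of the kind of head-motion tracking a head-mounted camera enables: sparse Lucas-Kanade optical flow on the static scene yields the apparent image motion, whose reversal approximates the head's own motion. All parameter values are illustrative.

        import cv2
        import numpy as np

        def head_motion(prev_gray, cur_gray):
            """Median optical-flow vector between two frames from a
            head-mounted camera; with a static scene, (-dx, -dy)
            approximates the head's image-plane motion."""
            p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                         qualityLevel=0.01, minDistance=7)
            if p0 is None:
                return 0.0, 0.0
            p1, ok, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, p0, None)
            good = ok.ravel() == 1
            flow = (p1[good] - p0[good]).reshape(-1, 2)
            if len(flow) == 0:
                return 0.0, 0.0
            dx, dy = np.median(flow, axis=0)
            return -float(dx), -float(dy)  # scene motion reversed = head motion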

  • 7.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Active Vision for Tremor Disease Monitoring (2015). In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, 2015, Vol. 3, pp. 2042-2048. Conference paper (Refereed)
    Abstract [en]

    The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has previously been used for this purpose, the system we are introducing differs intrinsically from other traditional systems. The essential difference is characterized by the placement of the camera on the user's body rather than in front of it, thus reversing the whole process of motion estimation. This is called active motion tracking. Active vision is simpler in setup and achieves more accurate results compared to traditional arrangements, which we refer to as "passive" here. One main advantage of active tracking is its ability to detect even tiny motions using a simple setup, which makes it very suitable for monitoring tremor disorders.
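    A minimal sketch of active, body-mounted motion sensing, assuming OpenCV and two consecutive grayscale frames: phase correlation recovers sub-pixel translations, which is what makes tiny tremor-induced shifts measurable. This illustrates the general principle only, not the authors' pipeline.

        import cv2
        import numpy as np

        def frame_shift(prev_gray, cur_gray):
            """Sub-pixel translation between consecutive frames from a
            body-mounted camera. Tremor moves the camera, so the static
            scene appears to shift."""
            (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_gray),
                                                     np.float32(cur_gray))
            return dx, dy

        # Accumulating (dx, dy) over time gives a displacement series whose
        # spectrum (e.g. an FFT) exposes tremor amplitude and frequency.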

  • 8.
    Halawani, Alaa
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Anani, Adi
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Active vision for controlling an electric wheelchair (2012). In: Intelligent Service Robotics, ISSN 1861-2776, Vol. 5, no. 2, pp. 89-98. Article in journal (Refereed)
    Abstract [en]

    Most of the electric wheelchairs available on the market are joystick-driven and therefore assume that the user is able to steer with hand motion. This does not apply to many users, such as quadriplegia patients, who are only capable of moving the head. This paper presents a vision-based head motion tracking system to enable such patients to control the wheelchair. The novel approach that we suggest is to use active rather than passive vision to achieve head motion tracking. In active vision-based tracking, the camera is placed on the user's head rather than in front of it. This makes tracking easier and more accurate, and it enhances the resolution. This is demonstrated theoretically and experimentally. The proposed tracking scheme is then used successfully to control our electric wheelchair to navigate in a real-world environment.
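    As a hedged sketch of the final step, turning tracked head motion into wheelchair commands, the snippet below maps an image-plane motion estimate (for instance from the optical-flow sketch under entry 6) to speed and turn values. The dead zone, gain, and sign conventions are made-up illustrative choices, not the paper's controller.

        def drive_command(dx, dy, dead_zone=3.0, gain=0.02):
            """Map head motion in pixels to normalized wheelchair commands:
            sideways head motion steers, nodding forward/backward sets speed."""
            turn = 0.0 if abs(dx) < dead_zone else gain * dx
            speed = 0.0 if abs(dy) < dead_zone else -gain * dy
            clamp = lambda v: max(-1.0, min(1.0, v))
            return clamp(speed), clamp(turn)

        # speed, turn = drive_command(*head_motion(prev_frame, cur_frame))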

  • 9.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Computer Engineering Department, Palestine Polytechnic University, Hebron 90100, Palestine.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Action Augmented Real Virtuality Design for Presence (2018). In: IEEE Transactions on Cognitive and Developmental Systems, ISSN 2379-8920, Vol. 10, no. 4, pp. 961-972. Article in journal (Refereed)
    Abstract [en]

    This paper addresses the important question of how to design a video teleconferencing setup that increases the experience of spatial and social presence. Traditional video teleconferencing setups fall short in conveying the nonverbal behaviors that humans express in face-to-face communication, which results in a decreased experience of presence. To address this issue, we first present a conceptual framework of presence for video teleconferencing. We introduce a modern presence concept called real virtuality and propose a new way of achieving it, based on body or artifact actions, to increase the feeling of presence; we name this concept presence through actions. Using this new concept, we present the design of a novel action-augmented real virtuality prototype that considers the challenges related to the design of an action prototype, action embodiment, and face representation. Our action prototype is a telepresence mechatronic robot (TEBoT), and action embodiment is through a head-mounted display (HMD). The face representation solves the problem of face occlusion introduced by the HMD. The novel combination of HMD, TEBoT, and face representation algorithm has been tested in a real video teleconferencing scenario for its ability to solve the challenges related to spatial and social presence. We performed a user study in which invited participants experienced our novel setup and compared it with a traditional video teleconferencing setup. The results show that the action capabilities not only increase the feeling of spatial presence but also increase the feeling of social presence of a remote person among local collaborators.

  • 10.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. University of East London, London, England.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Face-off: a Face Reconstruction Technique for Virtual Reality (VR) Scenarios (2016). In: Computer Vision: ECCV 2016 Workshops / [ed] Hua G., Jégou H., Springer, 2016, Vol. 9913, pp. 490-503. Conference paper (Refereed)
    Abstract [en]

    Virtual Reality (VR) headsets occlude a significant portion of the human face. The real human face is required in many VR applications, for example video teleconferencing. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution builds on asymmetrical principal component analysis (aPCA). A user-specific training model is built using aPCA with full-face, lips, and eye-region information. During the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step is performed to align test information with training data, while the latter uses half-face information to reconstruct the full face using the aPCA-trained data. The proposed approach is validated with qualitative and quantitative analysis.
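    A simplified, symmetric-PCA stand-in for the reconstruction step (the paper's aPCA formulation differs): fit PCA on vectorized full-face training images, then estimate the coefficients by least squares from only the unoccluded pixels and render the complete face. Array shapes and index sets are assumptions for illustration.

        import numpy as np

        def fit_pca(faces, k=50):
            """faces: (n_samples, n_pixels) full-face training vectors."""
            mean = faces.mean(axis=0)
            # Rows of vt are the principal components ("eigenfaces").
            _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
            return mean, vt[:k]

        def reconstruct(mean, comps, visible_idx, visible_pixels):
            """Least-squares PCA coefficients from the visible (lower-face)
            pixels only, then synthesis of the full face."""
            a = comps[:, visible_idx].T            # (n_visible, k)
            b = visible_pixels - mean[visible_idx]
            coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
            return mean + coeffs @ comps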

  • 11.
    Li, Bo
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Söderström, Ulrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Scale & rotation-invariant matching with curve chain. Manuscript (preprint) (Other academic)
    Abstract [en]

    This paper presents a new methodology that matches image geometry using a curve chain. A curve chain is defined as a one-dimensional arrangement of curves. The idea is to match images without using local descriptors and to apply this concept in applications. This paper makes two contributions. First, we present a novel curve feature that is scale- and rotation-invariant. Second, we present an efficient scale- and rotation-invariant matching method that matches curve chains in the scene. The efficiency stems from three factors. First, matching a one-dimensional curve chain requires only quadratic operations when dynamic programming is used. Second, curves are salient features that naturally reduce the dimensionality compared with scanning all possible locations. Third, curves provide stable relational cues between neighbouring curves. Such stable relational cues reduce the computation to linear operations by avoiding a search over all combinations of curves in the dynamic programming. The method has good potential to benefit applications including point correspondence matching and object detection. In point correspondence experiments our method yields a good total matching score under various image transformations. At the same time, the proposed method shows good potential for matching non-rigid objects such as faces with scale and rotation invariance.
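    The chain-matching idea, assigning each curve of an ordered template chain to a scene curve by dynamic programming while balancing descriptor similarity against relational consistency between neighbours, can be sketched as below. The descriptors and the relational cost are placeholders; this naive version costs O(N*M^2) and omits the pruning that the paper's stable relational cues enable.

        import numpy as np

        def match_chain(tpl_desc, scene_desc, rel_cost, lam=1.0):
            """tpl_desc: (N, d) template curve descriptors (ordered chain);
            scene_desc: (M, d) scene curve descriptors; rel_cost(jp, jc):
            cost of assigning neighbouring template curves to scene curves
            jp and jc. Returns the total cost and one scene index per curve."""
            N, M = len(tpl_desc), len(scene_desc)
            unary = np.linalg.norm(tpl_desc[:, None] - scene_desc[None], axis=2)
            cost, back = unary[0].copy(), np.zeros((N, M), dtype=int)
            for i in range(1, N):
                pair = np.array([[rel_cost(jp, jc) for jc in range(M)]
                                 for jp in range(M)])
                total = cost[:, None] + lam * pair
                back[i] = total.argmin(axis=0)
                cost = unary[i] + total.min(axis=0)
            j = int(cost.argmin())
            chain = [j]
            for i in range(N - 1, 0, -1):
                j = int(back[i, j])
                chain.append(j)
            return float(cost.min()), chain[::-1]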

  • 12.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Chinese Academy of Science, China.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Palestine Polytechnic University.
    Feng, Shengzhong
    Chinese Academy of Science, China.
    Li, Haibo
    Royal Institute of Technology, Stockholm, Sweden.
    Ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Multimodal Hand and Foot Gesture Interaction for Handheld Devices (2014). In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 11, no. 1, article id 10. Article in journal (Refereed)
    Abstract [en]

    We present a hand-and-foot-based multimodal interaction approach for handheld devices. Our method combines input modalities (i.e., hand and foot) and provides a coordinated output to both modalities along with audio and video. The human foot gesture is detected and tracked using contour-based template detection (CTD) and the Tracking-Learning-Detection (TLD) algorithm. The 3D foot pose is estimated from the homography matrix observed by the camera. Stereoscopic 3D rendering and vibrotactile feedback are used to enhance the immersive feeling. As a proof of concept, we developed a multimodal football game based on this approach. We confirm our system's user satisfaction through a user study.
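    The pose step the abstract mentions can be illustrated with standard OpenCV calls: estimate the plane-to-image homography from matched points and decompose it into candidate rotations and translations given the camera intrinsics. The point arrays and the intrinsic matrix K are assumed inputs, and this is a generic recipe rather than the paper's exact procedure.

        import cv2
        import numpy as np

        # src_pts, dst_pts: (N, 1, 2) float32 matched points on the planar
        # target (e.g. the tracked foot patch) and in the current frame;
        # K: 3x3 camera intrinsic matrix. All are assumed to exist.
        H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
        n_solutions, rotations, translations, normals = \
            cv2.decomposeHomographyMat(H, K)
        # Each (rotations[i], translations[i]) is a candidate 3D pose of the
        # plane; disambiguate using visibility (points must lie in front of
        # the camera) or a known plane normal.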

  • 13.
    Lu, Zhihan
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Khan, Muhammad Sikandar Lal
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    ur Rehman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    KTH.
    Finger in air: touch-less interaction on smartphone (2013). In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, Luleå, Sweden, 2013. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection, which attempts to provide a "natural user interface". No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we design two smartphone applications: a circle menu application, which provides the user with graphics and the smartphone's status information, and a bouncing ball game, a finger-gesture-based bouncing ball application. Users interact with these applications using finger gestures through the smartphone's camera view, which triggers the interaction events and generates activity sequences for interactive buffers. Our preliminary user study evaluation demonstrates the effectiveness and the social acceptability of the proposed interaction approach.
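    A rough sketch of the kind of markerless fingertip detection such an interface can build on (not the authors' method, whose details the abstract does not give): a skin-colour threshold in YCrCb, the largest contour, and its topmost point taken as the fingertip. The colour bounds are common rule-of-thumb values.

        import cv2
        import numpy as np

        def fingertip(frame_bgr):
            """Return an (x, y) fingertip estimate from a camera frame,
            or None if no skin-coloured region is found."""
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
            mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            hand = max(contours, key=cv2.contourArea)
            x, y = hand[hand[:, :, 1].argmin()][0]  # topmost contour point
            return int(x), int(y)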

  • 14.
    Lv, Zhihan
    et al.
    Halawani, Alaa
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik. Computer Engineering and Science Department, Palestine Polytechnic University, Hebron, Palestine.
    Feng, Shengzhong
    ur Réhman, Shafiq
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för tillämpad fysik och elektronik.
    Li, Haibo
    Touch-less interactive augmented reality game on vision-based wearable device (2015). In: Personal and Ubiquitous Computing, ISSN 1617-4909, E-ISSN 1617-4917, Vol. 19, no. 3-4, pp. 551-567. Article in journal (Refereed)
    Abstract [en]

    There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive, augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games through dynamic hand/foot gestures in front of the camera, which trigger interaction events that act on the virtual objects in the scene. Three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as a proof of concept. Finally, a comparative evaluation demonstrates the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or with Google Glass, together with a workload assessment and a study of users' emotions and satisfaction.
