umu.se Publications
1 - 50 of 128 publications
  • 1.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    KTH Royal Institute of Technology, Department of Media Technology and Interaction Design.
    Kouma, Jean-Paul
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH Royal Institute of Technology, Department of Media Technology and Interaction Design.
    Direct hand pose estimation for immersive gestural interaction2015In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 66, p. 91-99Article in journal (Refereed)
    Abstract [en]

    This paper presents a novel approach for performing intuitive gesture-based interaction using depth data acquired by Kinect. The main challenge in enabling immersive gestural interaction is dynamic gesture recognition. This problem can be formulated as a combination of two tasks: gesture recognition and gesture pose estimation. Incorporating a fast and robust pose estimation method would lessen the burden to a great extent. In this paper we propose a direct method for real-time hand pose estimation. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to directly estimate 3D hand motion without any need to impose other constraints. Extensive experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system performance in 3D object manipulation on two different setups: desktop computing and a mobile platform. This reveals the system's capability to accommodate different interaction procedures. In addition, a user study is conducted to evaluate learnability, user experience and interaction quality in 3D gestural interaction in comparison to 2D touchscreen interaction.

  • 2.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Direct three-dimensional head pose estimation from Kinect-type sensors2014In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X, Vol. 50, no 4, p. 268-270Article in journal (Refereed)
    Abstract [en]

    A direct method for recovering three-dimensional (3D) head motion parameters from a sequence of range images acquired by Kinect sensors is presented. Based on the range images, a new version of the optical flow constraint equation is derived, which can be used to directly estimate 3D motion parameters without any need to impose other constraints. Since all calculations with the new constraint equation are based on the range images, Z(x, y, t), the existing techniques and experience developed and accumulated on the topic of motion from optical flow can be applied directly, simply by treating the range images as normal intensity images I(x, y, t). In this work, it is demonstrated how to employ the new optical flow constraint equation to recover the 3D motion of a moving head from sequences of range images and, furthermore, how to use an old trick to handle the case when the optical flow is large. It is shown, in the end, that the performance of the proposed approach is comparable with that of some of the state-of-the-art approaches that use range data to recover 3D motion parameters.
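    The idea the abstract hinges on, treating range images Z(x, y, t) like intensity images I(x, y, t), can be sketched as follows. This is a reconstruction of the general range-flow constraint from the optical flow literature, not the paper's exact derivation:

    ```latex
    % Classical optical flow constraint for intensity images I(x, y, t),
    % assuming brightness constancy:
    I_x u + I_y v + I_t = 0
    % Analogue for range images Z(x, y, t): depth is not conserved under
    % motion along the optical axis, so a depth-velocity term w appears:
    Z_x u + Z_y v + Z_t = w
    ```

    Here $(u, v)$ is the image-plane motion and $w$ the velocity along the optical axis; because the left-hand side has the same form in both cases, intensity-based flow machinery carries over to range data.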

  • 3.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Gesture Tracking for 3D Interaction in Augmented Environments2011Conference paper (Other academic)
  • 4.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Real 3D Interaction Behind Mobile Phones for Augmented Environments2011In: 2011 IEEE International Conference on Multimedia and Expo (ICME), IEEE conference proceedings, 2011, p. 1-6Conference paper (Refereed)
    Abstract [en]

    The number of mobile devices such as mobile phones and PDAs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate in the 3D space behind the device, in the camera's field of view. This paper suggests the use of particular patterns from the local orientation of the image, called Rotational Symmetries, to detect and localize human gestures. The relative rotation and translation of the gesture between consecutive frames are estimated by extracting stable features. Consequently, this information can be used to facilitate the 3D manipulation of virtual objects in various applications on mobile devices.

  • 5.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sonning, Samuel
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sonning, Sabina
    3D Head Pose Estimation Using the Kinect2011Conference paper (Refereed)
    Abstract [en]

    Head pose estimation plays an essential role in bridging the information gap between humans and computers. Conventional head pose estimation methods mostly operate on images captured by cameras. However, accurate and robust pose estimation is often problematic. In this paper we present an algorithm for recovering the six degrees of freedom (DOF) of motion of a head from a sequence of range images taken by the Microsoft Kinect for Xbox 360. The proposed algorithm utilizes a least-squares minimization of the difference between the measured rate of change of depth at a point and the rate predicted by the depth rate constraint equation. We segment the human head from its surroundings and background, and then we estimate the head motion. Our system has the capability to recover the six DOF of the head motion of multiple people in one image. The proposed system is evaluated in our lab and presents superior results.
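    The least-squares step described above can be sketched in a few lines: for rigid motion with translation v and rotation w, the depth rate predicted at a 3D point (X, Y, Z) with depth gradients (Zx, Zy) is linear in the six motion parameters, so stacking one equation per pixel yields an ordinary linear least-squares problem. The sketch below is an illustration under our own assumptions (synthetic data, the generic range-flow constraint), not the paper's implementation:

    ```python
    import numpy as np

    def estimate_motion(P, Zx, Zy, Zt):
        """Least-squares 6-DOF motion (vx, vy, vz, wx, wy, wz) from the
        range-flow constraint  Zx*Xdot + Zy*Ydot + Zt = Zdot  at each point,
        with the rigid-motion model  Pdot = v + w x P."""
        X, Y, Z = P[:, 0], P[:, 1], P[:, 2]
        # Coefficients of the six unknowns, one row per point.
        A = np.column_stack([
            Zx,                 # vx
            Zy,                 # vy
            -np.ones_like(X),   # vz
            -Zy * Z - Y,        # wx
            Zx * Z + X,         # wy
            -Zx * Y + Zy * X,   # wz
        ])
        b = -Zt
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        return theta

    # Synthetic check: depth derivatives generated from a known rigid motion.
    rng = np.random.default_rng(0)
    P = rng.uniform(-1, 1, size=(200, 3)) + np.array([0.0, 0.0, 2.0])
    Zx, Zy = rng.normal(size=200), rng.normal(size=200)
    v_true = np.array([0.1, -0.2, 0.05])
    w_true = np.array([0.02, 0.01, -0.03])
    Pdot = v_true + np.cross(w_true, P)          # rigid-body point velocities
    Zt = Pdot[:, 2] - Zx * Pdot[:, 0] - Zy * Pdot[:, 1]
    theta = estimate_motion(P, Zx, Zy, Zt)
    print(np.round(theta, 4))  # recovers ~ [0.1, -0.2, 0.05, 0.02, 0.01, -0.03]
    ```

    With noise-free synthetic derivatives the system is exactly consistent, so the estimate matches the true motion up to numerical precision; real depth maps would add a robust weighting or RANSAC stage on top.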

  • 6.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    KTH Royal Institute of Technology, Department of Media Technology and Interaction Design.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH Royal Institute of Technology, Department of Media Technology and Interaction Design.
    Head operated electric wheelchair2014In: IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI 2014), IEEE , 2014, p. 53-56Conference paper (Refereed)
    Abstract [en]

    Currently, the most common way to control an electric wheelchair is to use a joystick. However, some individuals, such as quadriplegia patients, are unable to operate joystick-driven electric wheelchairs due to severe physical disabilities. This paper proposes a novel head pose estimation method to assist such patients. Head motion parameters are employed to control and drive an electric wheelchair. We introduce a direct method for estimating user head motion, based on a sequence of range images captured by Kinect. In this work, we derive a new version of the optical flow constraint equation for range images. We show how the new equation can be used to estimate head motion directly. Experimental results reveal that the proposed system works with high accuracy in real time. We also show simulation results for navigating the electric wheelchair by recovering user head motion.

  • 7.
    Abedan Kondori, Farid
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Yousefi, Shahrouz
    Ostovar, Ahmad
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    A Direct Method for 3D Hand Pose Recovery2014In: 22nd International Conference on Pattern Recognition, 2014, p. 345-350Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel approach for performing intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. In this paper, we formulate the gesture-based interaction system as a combination of two separate problems, gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to directly estimate 3D hand motion without any need to impose other constraints. Our experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system performance in 3D object manipulation. This application is intended to explore the system capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate the learnability, user experience and interaction quality of 3D interaction in comparison to 2D touch-screen interaction.

  • 8.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    School of Computer Science & Communication, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Template-based Search: A Tool for Scene Analysis2016In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceeding, IEEE, 2016, article id 7515772Conference paper (Refereed)
    Abstract [en]

    This paper proposes a simple and yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. The template is matched to contours in a given edge image to locate the designated entity. These templates are allowed to deform in order to deal with variations in the structure's shape and size. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in cluttered and noisy scenes in the applications presented.
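    The segment-by-segment matching that the abstract describes is a classic dynamic-programming search. The sketch below is a generic toy formulation of that pattern (our own simplified cost model, not the paper's): each template segment is assigned to one candidate contour position, paying a unary matching cost plus a smoothness penalty between consecutive segments, and the minimum-cost assignment is found by DP:

    ```python
    def dp_match(unary, smooth_weight=1.0):
        """Minimum-cost assignment of template segments to candidate positions.

        unary[i][j]: cost of placing segment i at candidate position j.
        Consecutive segments pay smooth_weight * |j - k| for jumping between
        positions j and k. Returns (total_cost, chosen positions)."""
        n, m = len(unary), len(unary[0])
        cost = [list(unary[0])]   # cost[i][j]: best cost up to segment i ending at j
        back = []                 # back[i-1][j]: predecessor position for that optimum
        for i in range(1, n):
            row, brow = [], []
            for j in range(m):
                best_k = min(range(m),
                             key=lambda k: cost[i - 1][k] + smooth_weight * abs(j - k))
                row.append(unary[i][j] + cost[i - 1][best_k]
                           + smooth_weight * abs(j - best_k))
                brow.append(best_k)
            cost.append(row)
            back.append(brow)
        # Pick the best final position, then backtrack to recover the path.
        j = min(range(m), key=lambda k: cost[-1][k])
        total = cost[-1][j]
        path = [j]
        for brow in reversed(back):
            j = brow[j]
            path.append(j)
        return total, path[::-1]

    # Toy example: 3 segments, 4 candidate positions; the cheap cells line up
    # at position 1, so the smoothness term never fires.
    unary = [[5, 0, 5, 5],
             [5, 0, 5, 5],
             [5, 1, 5, 5]]
    print(dp_match(unary))  # (1.0, [1, 1, 1])
    ```

    The deformation tolerance comes from the smoothness term: lowering `smooth_weight` lets segments drift further apart to follow a distorted contour.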

  • 9.
    Anani, Adi
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Diagnostic instrument for children with reading disorders2006In: International Journal of Scientific Research, ISSN 1021-0806, Vol. 16, p. 167-170Article in journal (Refereed)
    Abstract [en]

    A simple and cost-effective wearable gaze tracking system is designed to observe the reading pattern of patients with reading disorders, in order to facilitate the work of ophthalmologists and the multidisciplinary treating teams in making a reliable diagnosis. The system consists of two miniaturized cameras mounted on a headset: one for eye tracking and one for the scene. The eye tracking information is combined with information extracted from the picture of the forward-looking camera to identify the gaze point online. When reading a text, the gaze point moves and a reading pattern is created.

  • 10.
    Anani, Adi
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Zhang, Deng-yin
    Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunications, China.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    QoS-guaranteed packet scheduling in wireless networks2009In: The Journal of China Universities of Posts and Telecommunications, ISSN 1005-8885, Vol. 16, no 2, p. 63-67Article in journal (Refereed)
    Abstract [en]

    To guarantee the quality of service (QoS) of a wireless network, a new packet scheduling algorithm using a cross-layer design technique is proposed in this article. First, the demands of packet scheduling for multimedia transmission in wireless networks and the deficiencies of existing packet scheduling algorithms are analyzed. Then the model of the QoS-guaranteed packet scheduling (QPS) algorithm for high speed downlink packet access (HSDPA) and the cost function of packet transmission are designed. The calculation method of packet delay time for wireless channels is expounded in detail, and complete steps to realize the QPS algorithm are also given. The simulation results show that the QPS algorithm, which provides the scheduling sequence of packets from the calculated values, can effectively improve delay and throughput performance.

  • 11.
    Cheng, Xiaogang
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, China; School of Electrical Engineering and Computer Science, Royal Institute of Technology, Stockholm, Sweden.
    Yang, Bin
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. School of Environmental and Municipal Engineering, Xi’an University of Architecture and Technology, Xi'an, China.
    Liu, Guoqing
    Olofsson, Thomas
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    A total bounded variation approach to low visibility estimation on expressways2018In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no 2, article id 392Article in journal (Refereed)
    Abstract [en]

    Low visibility on expressways caused by heavy fog and haze is a main cause of traffic accidents. Real-time estimation of atmospheric visibility is an effective way to reduce traffic accident rates. With the development of computer technology, estimating atmospheric visibility via computer vision has become a research focus. However, the estimation accuracy needs to be enhanced, since fog and haze are complex and time-varying. In this paper, a total bounded variation (TBV) approach to estimate low visibility (less than 300 m) is introduced. Surveillance images of fog and haze are processed as blurred images (pseudo-blurred images), while surveillance images at selected road points on sunny days are treated as clear images, with fog and haze considered as noise superimposed on the clear images. By combining the image spectrum and TBV, the features of foggy and hazy images can be extracted. The extraction results are compared with features of images taken on sunny days. Firstly, low visibility surveillance images can be filtered out according to the spectral features of foggy and hazy images. For foggy and hazy images with visibility of less than 300 m, the high-frequency coefficient ratio of the Fourier (discrete cosine) transform is less than 20%, while the low-frequency coefficient ratio is between 100% and 120%. Secondly, the relationship between TBV and real visibility is established based on machine learning and piecewise stationary time series analysis. The established piecewise function can be used for visibility estimation. Finally, the proposed visibility estimation approach is validated on real surveillance video data, and the validation results are compared with those of an image contrast model. The video data were collected from the Tongqi expressway, Jiangsu, China; a total of 1,782,000 frames were used, and the relative errors of the proposed approach are less than 10%.
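    The high-frequency coefficient ratio that the abstract uses as a fog feature can be illustrated with a generic sketch (our own simplified FFT energy-ratio formulation; the paper's exact coefficient ratios and thresholds are not reproduced here): fog blurs the scene and suppresses high spatial frequencies, so the fraction of spectral energy outside a low-frequency band drops.

    ```python
    import numpy as np

    def high_freq_ratio(img, cutoff=0.1):
        """Fraction of spectral energy above the cutoff spatial frequency."""
        F = np.fft.fftshift(np.fft.fft2(img))
        power = np.abs(F) ** 2
        h, w = img.shape
        fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
        fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
        high = np.sqrt(fx ** 2 + fy ** 2) > cutoff
        return power[high].sum() / power.sum()

    # A sharp scene (random texture) vs. the same scene "fogged"
    # (5x5 box blur implemented with circular shifts).
    rng = np.random.default_rng(1)
    sharp = rng.uniform(size=(64, 64))
    k = 5
    foggy = sum(np.roll(np.roll(sharp, i, 0), j, 1)
                for i in range(k) for j in range(k)) / k ** 2
    print(high_freq_ratio(sharp) > high_freq_ratio(foggy))  # True
    ```

    A real pipeline would compute such ratios per camera and frame, then map them to visibility via the learned piecewise function the abstract describes.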

  • 12.
    Darvish, Ali Mohammed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Söderström, Ulrik
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Super-resolution facial images from single input images based on discrete wavelet transform2014In: 22nd International Conference on Pattern Recognition, 2014, p. 843-848Conference paper (Refereed)
    Abstract [en]

    In this work, we present a technique that allows for accurate estimation of frequencies at higher resolutions than the original image content. This technique uses asymmetrical Principal Component Analysis together with the Discrete Wavelet Transform (aPCA-DWT). For example, high quality content can be generated from low quality cameras, since the necessary frequencies can be estimated through reliable methods. Within our research, we build models for interpreting facial images, from which super-resolution versions of human faces can be created. We have carried out several different experiments, extracting the frequency content in order to create models with aPCA-DWT. The results are presented along with experiments on deblurring and zooming beyond the original image resolution. For example, when an image is enlarged 16 times in decoding, the proposed technique outperforms interpolation by more than 7 dB on average.

  • 13.
    Fahlquist, Karin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Karlsson, Johannes
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ren, Keni
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur-Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Wark, Tim
    CSIRO.
    Human animal machine interaction: Animal behavior awareness and digital experience2010In: Proceedings of ACM Multimedia 2010 - Brave New Ideas, 25-29 October 2010, Firenze, Italy., 2010, p. 1269-1274Conference paper (Refereed)
    Abstract [en]

    This paper proposes an intuitive wireless sensor/actuator based communication network for human-animal interaction in a digital zoo. In order to enhance effective observation and control of wildlife, we have built a wireless sensor network: 25 video-transmitting nodes are installed for animal behavior observation, and experimental vibrotactile collars have been designed for effective control in an animal park.

    The goal of our research is twofold. Firstly, to provide interaction between digital users and animals, and to monitor animal behavior for safety purposes. Secondly, to investigate how animals can be controlled or trained using vibrotactile stimuli instead of electric stimuli.

    We have designed a multimedia sensor network for human animal machine interaction. We have evaluated the effect of human animal machine state communication model in field experiments.

  • 14.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Le, Hung-son
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kouma, Jean-Paul
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Cabral, Regis
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    WWW.WAWO.NET: a CBIR for facial similarities2007In: Proceedings SSBA 2007, 2007, p. 125-128Conference paper (Refereed)
  • 15.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Content based image retrieval using a bootstrapped SOM network2006In: 3rd Int. Sym. on Neural Networks (ISNN'06), pp. 595-601, Chengdu, China, May 2006, Heidelberg: Springer Berlin , 2006, p. 595-601Conference paper (Refereed)
  • 16.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Document distances using the Zipf distribution and a novel metric2003Report (Other academic)
    Abstract [en]

    A novel metric is proposed in the present report for evaluating the goodness-of-fit criterion between the distribution functions of two samples. We extend the usage of the proposed criterion to the case of the generalized Zipf distribution. A detailed mathematical analysis of the proposed metric, which is embodied in a hypothesis test, is also provided.

  • 17.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    User behaviour modeling and content based speculative web page retrieval2006In: Data & Knowledge Engineering, ISSN 0169-023X, E-ISSN 1872-6933, Vol. 59, no 3, p. 770-788Article in journal (Refereed)
    Abstract [en]

    This paper provides a transparent and speculative algorithm for content based web page prefetching. The algorithm relies on a profile based on the Internet browsing habits of the user. It aims at reducing the perceived latency when the user requests a document by clicking on a hyperlink. The proposed user profile relies on the frequency of occurrence of selected elements forming the web pages visited by the user. These frequencies are employed in a mechanism for the prediction of the user's future actions. To anticipate an adjacent action, the anchored text around each of the outbound links is used and weights are assigned to these links. Some of the linked documents are then prefetched and stored in a local cache according to the assigned weights. The proposed algorithm was tested against three different prefetching algorithms and yielded improved cache-hit rates at a moderate bandwidth overhead. Furthermore, the precision of accurately inferring the user's preference is evaluated through recall-precision curves. Statistical evaluation confirms that the achieved recall-precision performance improvement is significant.
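    The weighting mechanism described, scoring each outbound link by how well its anchor text matches the user's term-frequency profile and prefetching the top-weighted links, can be sketched generically as follows (a simplified illustration under our own assumptions, not the paper's exact weighting scheme):

    ```python
    from collections import Counter

    def rank_links(profile, links, top_k=2):
        """Score each (url, anchor_text) pair by the summed profile frequency
        of its anchor terms, normalized by anchor length; return the top_k
        URLs to prefetch into the local cache."""
        def score(anchor):
            terms = anchor.lower().split()
            return sum(profile.get(t, 0) for t in terms) / max(len(terms), 1)
        ranked = sorted(links, key=lambda link: score(link[1]), reverse=True)
        return [url for url, _ in ranked[:top_k]]

    # Profile built from term frequencies of previously visited pages.
    profile = Counter({"python": 9, "tutorial": 5, "weather": 1})
    links = [("/py", "python tutorial"),
             ("/news", "daily news digest"),
             ("/wx", "weather forecast")]
    print(rank_links(profile, links))  # ['/py', '/wx']
    ```

    Capping `top_k` is what keeps the bandwidth overhead moderate: only the few links most consistent with past behavior are fetched speculatively.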

  • 18.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Web prefetching through automatic categorization2004Report (Other academic)
    Abstract [en]

    The present report provides a novel transparent and speculative algorithm for content based web page prefetching. The proposed algorithm relies on a user profile that is dynamically generated while the user is browsing the Internet and is updated over time. The objective is to reduce the user-perceived latency by anticipating future actions. To do so, the AdaBoost algorithm is used to automatically annotate the outbound links of a page with a predefined set of "labels". Afterwards, the links that correspond to labels relevant to the user's preferences are prefetched in an effort to reduce the perceived latency while the user is surfing the Internet. A comparison of the proposed algorithm against two other prefetching algorithms yields improved cache-hit rates at a moderate bandwidth overhead.

  • 19.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Gordan, Mihaela
    An ensemble of SOM networks for document organization and retrieval2006In: Int. Conf. on Adaptive Knowledge Representation and Reasoning, pp. 141-147, Espoo, Finland, June 2006, 2006, p. 141-147Conference paper (Refereed)
  • 20.
    Georgakis, Apostolos
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Gordan, Mihaela
    Technical University of Cluj-Napoca, Cluj-Napoca, Romania .
    Behavior modeling using bigram frequencies for client-side link prefetching2006In: IASTED Int. Conf. on Internet and Multimedia Systems and Applications (EuroIMSA'06), Innsbruck, Austria, February 2006, 2006, p. 41-46Conference paper (Refereed)
  • 21.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    FingerInk: Turn your Glass into a Digital Board2013In: Proceedings of the 25th OzCHI conference., 2013, p. 393-396Conference paper (Refereed)
    Abstract [en]

    We present a robust vision-based technology for hand and finger detection and tracking that can be used in many CHI scenarios. The method can be used in real-life setups and does not assume any predefined conditions. Moreover, it does not require any additional expensive hardware. It fits well into the user's environment without major changes and hence can be used in the ambient intelligence paradigm. Another contribution is interaction using glass, which is a natural yet challenging medium to interact with. We introduce the concept of an "invisible information layer" embedded into normal window glass that is thereafter used as an interaction medium.

  • 22.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Human Ear Localization: A Template-based Approach2015Conference paper (Refereed)
    Abstract [en]

    We propose a simple and yet effective technique for shape-based ear localization. The idea is based on using a predefined binary ear template that is matched to ear contours in a given edge image. To cope with changes in ear shapes and sizes, the template is allowed to deform. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust localization results in various cluttered and noisy setups.

  • 23.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Personal Relative Time: Towards Internet of Watches2011In: 2011 IEEE International Conferences on Internet of Things and Cyber, Physical and Social Computing, Los Alamitos: IEEE Computer Society, 2011, p. 678-682Conference paper (Refereed)
    Abstract [en]

    We introduce an idea for connecting timekeeping devices through the Internet, aiming at assigning people their individual personal time to loosen the strict rule of time synchronization that, in many cases, causes problems in accessing available resources. Information about these resources, the users, and their plans is utilized to accomplish the task. Time scheduling to assign users their individual time, and the readjustment of their timekeeping devices, is done implicitly so that they do not feel any abnormal changes during their day. This leads to a nonlinear relationship between real (absolute) time and personal time. We explain the concept, give examples, and suggest a framework for the system.

  • 24.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Building eye contact in e-learning through head-eye coordination2011In: International Journal of Social Robotics, ISSN 1875-4791, Vol. 3, no 1, p. 95-106Article in journal (Refereed)
    Abstract [en]

    Video conferencing is a very effective tool to use for e-learning. Most of the available video conferencing systems suffer from a main drawback: the lack of eye contact between participants. In this paper we present a new scheme for building eye contact in e-learning sessions. The scheme assumes a video conferencing session with a "one teacher, many students" arrangement. In our system, eye contact is achieved without the need for any gaze estimation technique. Instead, we "generate the gaze" by allowing the user to communicate his visual attention to the system through head-eye coordination. To enable real-time and precise head-eye coordination, a head motion tracking technique is required. Unlike traditional head tracking systems, our procedure suggests mounting the camera on the user's head rather than in front of it. This configuration achieves much better resolution and thus leads to better tracking results. Promising results obtained from both demo and real-time experiments demonstrate the effectiveness and efficiency of the proposed scheme. Although this paper concentrates on e-learning, the proposed concept can easily be extended to the world of interaction with social robotics, in which introducing eye contact between humans and robots would be of great advantage.

  • 25.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Active Vision for Tremor Disease Monitoring2015In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences AHFE 2015, 2015, Vol. 3, p. 2042-2048Conference paper (Refereed)
    Abstract [en]

    The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been used for this purpose before, the system we introduce differs intrinsically from traditional systems. The essential difference is the placement of the camera on the user’s body rather than in front of it, thus reversing the whole process of motion estimation. This is called active motion tracking. Active vision is simpler in setup and achieves more accurate results than traditional arrangements, which we refer to here as “passive”. One main advantage of active tracking is its ability to detect even tiny motions with a simple setup, which makes it very suitable for monitoring tremor disorders.
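    The resolution advantage claimed above can be illustrated with a small back-of-the-envelope sketch. All numbers here (focal length, head radius, camera distance) are assumed for illustration, not taken from the paper: a head-mounted camera sees every distant scene point shift by roughly f·tan(θ) pixels for a head rotation θ, while a front-mounted camera only sees the projected lateral displacement of a point on the head.

    ```python
    import math

    def active_displacement(f_px, dtheta):
        """Pixel shift seen by a head-mounted camera when the head rotates
        by dtheta radians: distant scene points move by about f * tan(dtheta),
        largely independent of scene depth."""
        return f_px * math.tan(dtheta)

    def passive_displacement(f_px, dtheta, head_radius_m, cam_dist_m):
        """Pixel shift of a point on the head (head_radius_m from the rotation
        axis) as seen by a camera placed cam_dist_m in front of the user."""
        lateral = head_radius_m * math.sin(dtheta)  # metres moved by the head point
        return f_px * lateral / cam_dist_m

    f = 800.0                    # assumed focal length in pixels
    tremor = math.radians(0.5)   # an assumed tremor-scale rotation of 0.5 degrees

    a = active_displacement(f, tremor)              # roughly 7 px
    p = passive_displacement(f, tremor, 0.10, 1.0)  # roughly 0.7 px
    ```

    With these assumed numbers, the head-mounted configuration magnifies the same 0.5° rotation by roughly a factor of ten, which is why tiny tremor motions become detectable with a simple setup.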

  • 26.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Active vision for controlling an electric wheelchair2012In: Intelligent Service Robotics, ISSN 1861-2776, Vol. 5, no 2, p. 89-98Article in journal (Refereed)
    Abstract [en]

    Most of the electric wheelchairs available on the market are joystick-driven and therefore assume that the user is able to use hand motion to steer the wheelchair. This does not apply to many users who are only capable of moving the head, such as quadriplegia patients. This paper presents a vision-based head motion tracking system to enable such patients to control the wheelchair. The novel approach that we suggest is to use active rather than passive vision to achieve head motion tracking. In active vision-based tracking, the camera is placed on the user’s head rather than in front of it. This makes tracking easier and more accurate and enhances the resolution, as we demonstrate both theoretically and experimentally. The proposed tracking scheme is then used successfully to control our electric wheelchair as it navigates in a real-world environment.

  • 27.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Enabling real-time video services over ad-hoc networks opens the gates for e-learning in areas lacking infrastructure2009In: International Journal of Interactive Mobile Technologies (iJIM),, ISSN 1865-7923, Vol. 3, no 4, p. 17-23Article in journal (Refereed)
    Abstract [en]

    In this paper we suggest a promising solution to overcome the problems of delivering e-learning to areas with missing or deficient infrastructure for Internet and mobile communication. We present a simple, reasonably priced, and efficient communication platform for providing e-learning, based on wireless ad-hoc networks. We also present a preemptive routing protocol suitable for real-time video communication over wireless ad-hoc networks. Our results show that this routing protocol can significantly improve the quality of the received video. This makes our suggested system not only able to overcome the infrastructure barrier but also capable of delivering high-quality e-learning material.

  • 28.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Eriksson, Jerry
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    P2P video multicast for wireless mobile clients2006In: The Fourth International Conference on Mobile Systems, Applications, and Services (MobiSys 2006). 19-22 June 2006, Uppsala, Sweden., 2006Conference paper (Refereed)
  • 29.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Eriksson, Jerry
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Real-time video over wireless ad-hoc networks2005In: Conference on Computer Communications and Networks, 2005.: Proceedings 14th International Conference on (ICCCN 2005), 17-19 October 2005, San Diego, California, USA, IEEE , 2005, p. 596-596Conference paper (Refereed)
    Abstract [en]

    In this paper we investigate important issues for real-time video over wireless ad-hoc networks at different protocol layers. Many error control methods in this setting use multiple streams and multipath routing. We have therefore developed a new proactive, link-state routing protocol that finds an available route in the network without causing any interruption in the video traffic between the source and the destination. An open-source MPEG-4 codec is also used to obtain efficient video quality.

  • 30.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Eriksson, Jerry
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Real-Time Video Performance Using Preemptive Routing2006In: The Australian Telecommunication Networks and Applications Conference (ATNAC 2006). 4-6 December 2006, Melbourne, Australia, 2006Conference paper (Refereed)
  • 31.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Eriksson, Jerry
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Two hop connectivity for uniformed randomly distributed points in the unit square2006In: Proceedings of the First International Conference on Communications and Networking in China (CHINACOM 2006). 25-27 October 2006, Beijing, China., 2006Conference paper (Refereed)
    Abstract [en]

    Connectivity in ad-hoc networks is a fundamental, but to a large extent still unsolved, problem. In this paper we consider the connectivity problem when a number of nodes are uniformly distributed within a unit square. We limit our study to one-hop and two-hop connectivity. For one-hop connectivity we derive the exact analytical solution. For two-hop connectivity we derive lower and upper bounds.
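    As a hedged illustration of the one-hop case (a generic Monte Carlo check, not the paper's analytical derivation), the probability that two nodes dropped uniformly in the unit square lie within communication radius r of each other can be estimated and compared against the standard closed form πr² − (8/3)r³ + r⁴/2 for r ≤ 1:

    ```python
    import random

    def one_hop_prob(r, trials=20000, seed=1):
        """Monte Carlo estimate of P(two uniform points in the unit square
        are within distance r), i.e. that they share a direct one-hop link."""
        rng = random.Random(seed)  # fixed seed for a reproducible estimate
        hits = 0
        for _ in range(trials):
            x1, y1, x2, y2 = (rng.random() for _ in range(4))
            if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= r * r:
                hits += 1
        return hits / trials

    # For r <= 1 the standard closed form is pi*r^2 - (8/3)*r^3 + r^4/2;
    # at r = 0.5 this gives about 0.483, which the estimate should match.
    ```

    For r ≥ √2 (the diagonal of the square) every pair of nodes is trivially connected, so the estimate saturates at 1.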

  • 32.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Israelsson, Mikael
    Vimeo AB.
    Eriksson, Jerry
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Efficient P2P mobile service for live media streaming2006In: Proceedings of the Australian Telecommunication Networks and Applications Conference (ATNAC 2006). 4-6 December 2006, Melbourne, Australia., 2006Conference paper (Refereed)
    Abstract [en]

    Mobile TV is an interesting new area in the telecommunication industry. Sending live video to mobile clients is constrained by relatively low CPU processing power, limited network resources, and low display resolution. In this paper we discuss a solution to all of these problems using application-layer multicasting, which can significantly reduce the bitrate and computing resources required for each client while increasing the received video quality. Several different methods for splitting the video into substreams are discussed. Simulations for a local wireless ad-hoc network are performed. A system for application-layer multicasting using layered H.264 is also presented.
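    The splitting methods themselves are not detailed in this abstract; as one hypothetical example (not necessarily one of the paper's methods), a simple round-robin temporal split distributes frames over k substreams, so that peers can forward different substreams and a client receiving only some of them still decodes a lower-frame-rate video:

    ```python
    def split_substreams(frames, k):
        """Round-robin temporal splitting: frame i goes to substream i mod k.
        Losing a substream degrades frame rate gracefully instead of
        breaking the whole stream."""
        streams = [[] for _ in range(k)]
        for i, frame in enumerate(frames):
            streams[i % k].append(frame)
        return streams
    ```

    For example, splitting frames 0..5 into two substreams yields the even frames in one stream and the odd frames in the other.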

  • 33.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Israelsson, Mikael
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Real-time video over wireless ad-hoc networks2004In: Proceedings, Symposium on Image Analysis, 2004, p. 106-109Conference paper (Refereed)
  • 34.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kouma, Jean-Paul
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Wark, Tim
    CSIRO.
    Corke, Peter
    CSIRO.
    Demonstration of Wyner-Ziv video compression in a wireless camera sensor network2009In: The 9th Scandinavian Workshop on Wireless Ad-hoc & Sensor Networks (ADHOC'09 ). 4-5 May 2009, Uppsala, Sweden., Uppsala, 2009Conference paper (Refereed)
    Abstract [en]

    Sending video over wireless sensor networks is a challenging task. The encoding and transmission of video are very resource-hungry, while the sensor nodes have very limited resources in terms of communication bandwidth, memory, and computation. In this paper we present a practical implementation of a Wyner-Ziv video codec, in which the reversed asymmetry in complexity between encoder and decoder is achieved. We also present the sensor network platform used in this demonstration, known as Fleck-3, as well as two different co-processor daughterboards for image processing. The daughterboards are then compared in terms of speed and energy consumption.

  • 35.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kouma, Jean-Paul
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Wark, Tim
    CSIRO.
    Corke, Peter
    CSIRO.
    Poster Abstract: Distributed Video Coding for a Low-Bandwidth Wireless Camera Network2008In: The 5th European conference on Wireless Sensor Networks (EWSN 2008). 30 January - 1 February 2008, Bologna, Italy., 2008Conference paper (Refereed)
  • 36.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kouma, Jean-Paul
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Wark, Tim
    Corke, Peter I.
    Demonstration of Wyner-Ziv video compression in a wireless camera sensor network2009In: The Sixth Swedish National Computer Networking Workshop and Ninth Scandinavian Workshop on Wireless Adhoc Networks (SNCNW+Adhoc 2009), 2009Conference paper (Other academic)
    Abstract [en]

    Wyner-Ziv video coding can provide low-complexity encoding and high-complexity decoding and is therefore a promising approach for video coding in wireless sensor networks. We will demonstrate our practical implementation of a Wyner-Ziv video codec. The hardware platform used in our camera sensor network is the Fleck camera developed by CSIRO ASL in Brisbane, Australia.

  • 37.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Moving motion estimation from encoder for low-complexity video sensorsManuscript (preprint) (Other academic)
    Abstract [en]

    In this paper we present an approach to provide efficient low-complexity video encoding for wireless sensor networks. We present a method based on removing the most time-consuming task, motion estimation, from the encoder. Instead, the decoder performs motion prediction based on the available decoded frames and sends the predicted motion vectors to the encoder. We present results based on a modified H.264 implementation. Our results show that this approach can provide rather good coding efficiency even for relatively high network delays.
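    The abstract does not specify how the decoder forms its prediction; as a hypothetical stand-in (not the paper's method), a constant-velocity extrapolation from the two most recently decoded frames' vectors illustrates the kind of per-block prediction a decoder could send back to the encoder:

    ```python
    def predict_mv(prev_mvs):
        """Linearly extrapolate the next motion vector for a block from the
        two most recent decoded frames' vectors (constant-velocity model):
        mv[t] = 2*mv[t-1] - mv[t-2]."""
        (x1, y1), (x2, y2) = prev_mvs[-2], prev_mvs[-1]
        return (2 * x2 - x1, 2 * y2 - y1)
    ```

    A block that moved from (0, 0) to (2, 1) between the last two frames would be predicted to continue to (4, 2); the encoder then only needs to code the residual against this prediction.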

  • 38.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    P2P video multicast for wireless mobile clients2006In: Proceedings SSBA 2006 / [ed] Fredrik Georgsson, Niclas Börlin, 2006, p. 85-88Conference paper (Refereed)
  • 39.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ren, Keni
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tracking and identification of animals for a digital zoo2010In: Proceedings of the 1st IEEE/ACM Internet of Things Symposium, 18-20 December 2010, Hangzhou, China., 2010Conference paper (Refereed)
    Abstract [en]

    In this paper we present our approach of using a combination of radio frequency identification (RFID) and a wireless camera sensor network to identify and track animals at a zoo. We have developed and installed 25 cameras covering the whole zoo. The cameras are fully autonomous and configure themselves into a wireless ad-hoc network. RFID readers are deployed at strategic locations to identify animals in close proximity. The camera network deployed in the zoo continuously tracks animals in its field of view. By fusing data from the camera system and the RFID readers we obtain semi-continuous tracking of individual animals. The camera network has been running in the zoo for more than one year, and about 5 000 hours of video have been captured and recorded. This gives us a very large dataset for offline development and testing of computer vision algorithms for animal detection and tracking.
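    A minimal sketch of the kind of data fusion described (the function, identifiers, and matching rule here are illustrative assumptions, not the paper's implementation): each anonymous camera-track event is labeled with the identity from the RFID read at the same reader location that is closest in time, within a window.

    ```python
    def fuse(track_events, rfid_reads, max_dt=5.0):
        """Label camera-track events (time, location) with animal identities
        from RFID reads (time, location, animal_id): pick the read at the
        same reader location closest in time, within max_dt seconds.
        Unmatched events stay anonymous (None)."""
        labeled = []
        for t, loc in track_events:
            best, best_dt = None, max_dt
            for rt, rloc, animal_id in rfid_reads:
                if rloc == loc and abs(rt - t) <= best_dt:
                    best, best_dt = animal_id, abs(rt - t)
            labeled.append((t, loc, best))
        return labeled
    ```

    Events with no RFID read nearby in time remain anonymous, which is what makes the resulting identity tracking semi-continuous rather than continuous.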

  • 40.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Augmented reality to enhance vistors experience in a digital zoo2010In: Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia (ACM MUM'10), Limassol, Cyprus, 2010Conference paper (Refereed)
  • 41.
    Karlsson, Johannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Wark, Tim
    CSIRO.
    Ren, Keni
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Fahlquist, Karin
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Applications of wireless visual sensor networks: the digital zoo2010In: Visual information processing in wireless sensor networks: Technology, trends and applications / [ed] Li-minn Ang, Kah Phooi Seng, IGI Global , 2010Chapter in book (Other academic)
    Abstract [en]

    In this chapter we describe our work to set up a large-scale wireless visual sensor network in a Swedish zoo. The zoo is located close to the Arctic Circle, making the environment very harsh for this type of deployment. The goal is to make the zoo digitally enhanced, leading to a more attractive and interactive zoo. To reach this goal, the sensed data are processed and semantic information is used to support interaction design, which is a key component in providing a new type of experience for the visitors. We describe our research work related to the various aspects of a digital zoo.

  • 42.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Réhman, Shafiq ur
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing2014In: Design, user experience, and usability: User experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, p. 574-585Conference paper (Refereed)
    Abstract [en]

    Despite the progress made in teleconferencing over the last decades, it is still far from a solved problem. In this work, we present an intuitive video teleconferencing system, the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study testing the hypothesis: “An embodied-interaction-based video conferencing system performs better than a standard video conferencing system in representing nonverbal behaviors, thus creating a ‘feeling of presence’ of a remote person among his/her local collaborators”. Our ETS integrates standard audio-video conferencing with a mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases ‘in-meeting interaction’ and enhances the ‘feeling of presence’ between the remote participant and his collaborators.

  • 43.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system2016In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no 5, p. 2597-2610Article in journal (Refereed)
    Abstract [en]

    Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built by considering a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy, and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing the real head movements through our novel single-camera-based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer vision based geometric head pose estimation algorithm, a model-based design (MBD) approach, and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of our proposed system.

  • 44.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    La Hera, Pedro
    Liu, Feng
    Li, Haibo
    A pilot user's prospective in mobile robotic telepresence system2014In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014Conference paper (Refereed)
    Abstract [en]

    In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for a pilot user. We use an Embodied Telepresence System (ETS) that was previously designed to enhance the experience of video teleconferencing for the collaborators. Here we deploy the ETS in a novel scenario to improve the experience of the pilot user during distance communication: the ETS is used to adjust the pilot user’s view at the distant location (e.g. a remotely located conference/meeting). A velocity-profile control for the ETS was developed, implicitly controlled by the pilot user’s head. An experiment was conducted to test whether the view-adjustment capability of the ETS increases the collaboration experience of video conferencing for the pilot user. In the user study, participants (pilot users) interacted using the ETS and using a traditional computer-based video conferencing tool. Overall, the user study suggests the effectiveness of our approach in enhancing the experience of video conferencing for the pilot user.

  • 45.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lu, Zhihan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Head Orientation Modeling: Geometric Head Pose Estimation using Monocular Camera2013In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, 2013, p. 149-153Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. Our algorithm initially employs Haar-like features to detect the face and facial feature areas (more precisely, the eyes). For robust tracking of these regions, it also uses the Tracking-Learning-Detection (TLD) framework on a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometrical approach relies on the structure of human facial features in the camera-view plane to estimate the yaw, pitch, and roll of the human head. The accuracy and effectiveness of our proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
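    The paper's pivot-point construction cannot be reproduced from the abstract alone; a much-simplified geometric sketch (all names and calibration constants here are assumptions, not the authors' algorithm) shows how two detected eye centres already constrain roll, yaw, and pitch:

    ```python
    import math

    def head_pose_from_eyes(left_eye, right_eye, f_px=800.0, ipd_px_frontal=120.0):
        """Rough pose in degrees from two eye centres in pixels, with the
        image origin at the principal point:
          roll  - angle of the inter-ocular line,
          yaw   - from foreshortening of the inter-ocular distance,
          pitch - from the vertical offset of the eye midpoint."""
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        roll = math.degrees(math.atan2(dy, dx))
        ipd = math.hypot(dx, dy)
        ratio = min(ipd / ipd_px_frontal, 1.0)   # guard acos domain
        yaw = math.degrees(math.acos(ratio))     # sign of yaw is ambiguous here
        mid_y = (left_eye[1] + right_eye[1]) / 2.0
        pitch = math.degrees(math.atan2(mid_y, f_px))
        return roll, yaw, pitch
    ```

    A frontal face with level eyes gives all three angles near zero; note that foreshortening alone cannot distinguish left from right yaw, which is one reason more structure (such as a pivot point) is needed in practice.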

  • 46.
    Kouma, Jean-Paul
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Large-scale face images retrieval: a distribution coding approach2009In: ICUMT 2009 - International Conference on Ultra Modern Telecommunications, 2009Conference paper (Other academic)
    Abstract [en]

    Great progress in face recognition technology has been made recently. Such advances open the possibility of building a new generation of search engine: a “Face Google” that searches person photos. It is very challenging to find a person in a very large or extremely large database that might hold face images of millions or hundreds of millions of people. The indexing technology used in most commercial search engines such as Google is very efficient for text-based search but no longer useful for image search. A solution is to use partial information (a signature) about all the face images for search; the retrieval speed is approximately proportional to the size of a signature image. In this paper we study a totally new way to compress the signature images, based on the observation that the face signature images and the query images are highly correlated if they are from the same individual. The face signature image can be greatly compressed (by one or two orders of magnitude) using knowledge of the query images, and we can expect the new compression algorithm to speed up face search 10 to 100 times. The challenge is that the query images are not available when we compress their signature image. Our approach is to transform the face search problem into the so-called “Wyner-Ziv coding” problem, which can give the same compression efficiency even if the query images are not available until we decompress the signature images. A practical compression scheme based on LDPC codes is developed to compress face signature images.

  • 47.
    Kouma, Jean-Paul
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Large-scale face images retrieval: a transform coding approach2010Conference paper (Other academic)
    Abstract [en]

    Huge efforts have been devoted to face recognition technology, and remarkable results have been achieved. Such advances open the possibility of building a new generation of search engine that fetches photos of persons. It is a real computing challenge to find a person in a very large or extremely large database that might hold face images of millions or hundreds of millions of people. A candidate solution is to use partial information (a signature) about all the face images for search, making the retrieval speed approximately proportional to the size of a signature image. In this paper we investigate a totally new way to compress the signature images, based on the observation that the face signature images and the query images are highly correlated if they are from the same individual. The face signature image can be greatly compressed (by one or two orders of magnitude) using knowledge of the query images, and we can expect the new compression algorithm to speed up face search 10 to 100 times. The challenge is that the query images are not available when we compress their signature image. Our approach is to transform the face search problem into the so-called “Wyner-Ziv coding” problem, which can give the same compression efficiency even if the query images are not available until we decompress the signature images. A practical compression scheme based on LDPC codes is developed to compress and retrieve face signature images.

  • 48.
    Le, Hung-Son
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    High Dynamic Range Imaging Through Multi-Resolusion Spline Fusion2007In: 20th International Symposium on Signal Processing and its Applications (ISSPA), 2007., 2007, p. 1-4Conference paper (Refereed)
  • 49.
    Le, Hung-Son
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Can the surveillance system run pose variant face recognition in real time?2006In: Proc. of the ICCV Workshop in Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), Beijing, China, October 2006., 2006, p. 209-216Conference paper (Refereed)
  • 50.
    Le, Hung-Son
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Fused Logarithmic Transform for Contrast Enhancement2008In: Electronics Letters, ISSN 0013-5194, Vol. 44, no 1, p. 19-20Article in journal (Refereed)