Li, Haibo
Publications (10 of 134)
Cheng, X., Yang, B., Liu, G., Olofsson, T. & Li, H. (2018). A total bounded variation approach to low visibility estimation on expressways. Sensors, 18(2), Article ID 392.
A total bounded variation approach to low visibility estimation on expressways
2018 (English) In: Sensors, E-ISSN 1424-8220, Vol. 18, no. 2, article id 392. Article in journal, Editorial material (Refereed). Published
Abstract [en]

Low visibility on expressways caused by heavy fog and haze is a major cause of traffic accidents, and real-time estimation of atmospheric visibility is an effective way to reduce accident rates. With the development of computer technology, estimating atmospheric visibility via computer vision has become a research focus. However, the estimation accuracy needs to be enhanced, since fog and haze are complex and time-varying. In this paper, a total bounded variation (TBV) approach to estimating low visibility (less than 300 m) is introduced. Surveillance images of fog and haze are processed as blurred images (pseudo-blurred images), while surveillance images of selected road points on sunny days are treated as clear images; fog and haze are thus regarded as noise superimposed on the clear images. By combining the image spectrum and TBV, features of foggy and hazy images can be extracted and compared with the features of sunny-day images. Firstly, low-visibility surveillance images can be filtered out according to the spectrum features of foggy and hazy images: for images with visibility below 300 m, the high-frequency coefficient ratio of the Fourier (discrete cosine) transform is less than 20%, while the low-frequency coefficient ratio is between 100% and 120%. Secondly, the relationship between TBV and real visibility is established based on machine learning and piecewise stationary time series analysis; the resulting piecewise function can be used for visibility estimation. Finally, the proposed visibility estimation approach is validated on real surveillance video data, and the results are compared with those of an image contrast model. The video data were collected from the Tongqi expressway, Jiangsu, China; a total of 1,782,000 frames were used, and the relative errors of the proposed approach are less than 10%.
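
To make the two spectrum/TBV quantities above concrete, here is a minimal illustrative sketch in Python (an assumption-laden reading of the abstract, not the paper's code: the frequency split, the normalization against a sunny-day reference frame, and all function names are hypothetical):

```python
import numpy as np
from scipy.fft import dctn  # 2-D discrete cosine transform

def total_variation(img):
    """Anisotropic total (bounded) variation of a grayscale image;
    fog and haze blur edges, so TBV drops as visibility falls."""
    return np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()

def band_energy(img, low_frac=0.25):
    """Split 2-D DCT energy into low- and high-frequency bands.
    The 25% corner split is an assumption, not the paper's choice."""
    c = np.abs(dctn(img.astype(float), norm="ortho"))
    h, w = c.shape
    low = c[: int(h * low_frac), : int(w * low_frac)].sum()
    return low, c.sum() - low

def coefficient_ratios(frame, clear_ref):
    """High- and low-frequency coefficient ratios of a surveillance frame
    relative to a clear (sunny-day) image of the same road point."""
    low_f, high_f = band_energy(frame)
    low_r, high_r = band_energy(clear_ref)
    # The abstract reports < 20% high-frequency ratio and a 100-120%
    # low-frequency ratio for frames with visibility below 300 m.
    return high_f / high_r, low_f / low_r
```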

Place, publisher, year, edition, pages
MDPI, 2018
Keywords
total bounded variation, image spectrum, low visibility estimation, piecewise stationary, fog and haze
National Category
Environmental Analysis and Construction Information Technology; Remote Sensing
Identifiers
urn:nbn:se:umu:diva-144176 (URN), 10.3390/s18020392 (DOI), 000427544000075 (ISI), 29382181 (PubMedID), 2-s2.0-85041434425 (Scopus ID)
Available from: 2018-01-24. Created: 2018-01-24. Last updated: 2023-03-24. Bibliographically approved
Li, B., Li, H. & Söderström, U. (2016). Distinctive curve features. Electronics Letters, 52(3), 197-198
Distinctive curve features
2016 (English) In: Electronics Letters, ISSN 0013-5194, E-ISSN 1350-911X, Vol. 52, no. 3, p. 197-198. Article in journal (Refereed). Published
Abstract [en]

Curves and lines are geometric, abstract features of an image. Whereas interest points are more limited, curves and lines provide much more information about the image structure. However, research on curve and line detection is very fragmented, and the concept of scale space is not yet well integrated into it. The keypoint (e.g. SIFT, SURF, ORB) is a successful concept that represents features (e.g. blobs, corners) in scale space. Stimulated by the keypoint concept, a method is proposed that extracts distinctive curves (DICU) in scale space, including lines as a special form of curve feature. A curve feature can be represented by three keypoints (two end points and one middle point). A good way to test the quality of detected curves is to analyse their repeatability under various image transformations. DICU is evaluated on the standard Oxford benchmark, where the overlap error is calculated by averaging the overlap errors of the three keypoints on the curve. Experimental results show that DICU achieves good repeatability compared with other state-of-the-art methods. To match curve features, a relatively uncomplicated way is to combine the local descriptors of the three keypoints on each curve.
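
As a rough illustration of the three-keypoint curve representation and the averaged overlap error described above, consider the hypothetical Python sketch below (the names and the simplified disc-overlap measure are assumptions; the Oxford benchmark itself uses elliptical-region overlap):

```python
import numpy as np

def curve_keypoints(curve):
    """Represent a sampled curve (an N x 2 array of points) by three
    keypoints: the two end points and the middle point."""
    curve = np.asarray(curve, dtype=float)
    return curve[[0, len(curve) // 2, -1]]

def keypoint_overlap_error(p, q, r_p, r_q):
    """Crude overlap error between two keypoints modelled as discs of
    radii r_p and r_q, in [0, 1]; 1.0 means no overlap at all."""
    d = np.linalg.norm(p - q)
    if d >= r_p + r_q:
        return 1.0
    return 1.0 - (1.0 - d / (r_p + r_q)) * (min(r_p, r_q) / max(r_p, r_q))

def curve_overlap_error(curve_a, curve_b, r_a=1.0, r_b=1.0):
    """Average the overlap errors of the three keypoints, mirroring the
    repeatability evaluation described in the abstract."""
    ka, kb = curve_keypoints(curve_a), curve_keypoints(curve_b)
    return float(np.mean([keypoint_overlap_error(a, b, r_a, r_b)
                          for a, b in zip(ka, kb)]))
```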

Place, publisher, year, edition, pages
John Wiley & Sons, 2016
Keywords
curve detection, line detection, feature matching
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Signal Processing
Identifiers
urn:nbn:se:umu:diva-111184 (URN), 10.1049/el.2015.3495 (DOI), 000369674000014 (ISI), 2-s2.0-85136434803 (Scopus ID)
Available from: 2015-11-06. Created: 2015-11-06. Last updated: 2024-07-02. Bibliographically approved
Khan, M. S., Li, H. & ur Réhman, S. (2016). Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system. Journal of Intelligent & Fuzzy Systems, 31(5), 2597-2610
Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system
2016 (English) In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no. 5, p. 2597-2610. Article in journal (Refereed). Published
Abstract [en]

Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built on biologically inspired (bio-inspired) designs that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy and verification of our socially interactive bio-inspired robot, the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing real head movements through our novel single-camera-based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer-vision-based geometric head pose estimation algorithm, a model-based design (MBD) approach and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of our proposed system.
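
The geometric head pose estimation step could, for instance, follow the standard perspective-n-point formulation; below is a minimal single-camera sketch using OpenCV, assuming six detected 2D facial landmarks and a generic 3D face model (the model coordinates and names are illustrative; the paper's actual vision algorithm is not reproduced here):

```python
import numpy as np
import cv2

# Generic 3D face model points in mm: nose tip, chin, outer eye corners,
# mouth corners. These coordinates are illustrative, not from the paper.
MODEL_3D = np.array([
    [0.0, 0.0, 0.0],          # nose tip
    [0.0, -330.0, -65.0],     # chin
    [-225.0, 170.0, -135.0],  # left eye outer corner
    [225.0, 170.0, -135.0],   # right eye outer corner
    [-150.0, -150.0, -125.0], # left mouth corner
    [150.0, -150.0, -125.0],  # right mouth corner
], dtype=np.float64)

def head_pose(landmarks_2d, frame_size):
    """Estimate head rotation (rvec) and translation (tvec) from six 2D
    landmarks; the result can drive the robot neck's motion planning."""
    h, w = frame_size
    f = float(w)  # rough focal-length guess; no lens distortion assumed
    K = np.array([[f, 0, w / 2],
                  [0, f, h / 2],
                  [0, 0, 1]], dtype=np.float64)
    pts = np.asarray(landmarks_2d, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, pts, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return (rvec, tvec) if ok else None
```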

Keywords
Socially interactive robot, biologically inspired robot, head pose estimation, vision based robot control, model based design, embodied telepresence system
National Category
Robotics; Computer Vision and Robotics (Autonomous Systems); Interaction Technologies
Identifiers
urn:nbn:se:umu:diva-108552 (URN), 10.3233/JIFS-169100 (DOI), 000386532000015 (ISI), 2-s2.0-84992110994 (Scopus ID)
Available from: 2015-09-14. Created: 2015-09-14. Last updated: 2023-03-24. Bibliographically approved
Halawani, A. & Li, H. (2016). Template-based Search: A Tool for Scene Analysis. In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceeding. Paper presented at the 12th IEEE Colloquium on Signal Processing and its Applications (CSPA 2016), Malacca, Malaysia, March 4-6, 2016. IEEE, Article ID 7515772.
Open this publication in new window or tab >>Template-based Search: A Tool for Scene Analysis
2016 (English) In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceeding, IEEE, 2016, article id 7515772. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a simple yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. A template is matched to contours in a given edge image to locate the designated entity. The templates are allowed to deform in order to deal with variations in the structure's shape and size; deformation is achieved by dividing the template into segments. A dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in the cluttered and noisy scenes of the applications presented.
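
The matching step is a classic dynamic programming search over segment placements. A minimal, hypothetical sketch follows (the segment costs and the deformation penalty are placeholders; the paper's exact formulation may differ):

```python
import numpy as np

def dp_template_match(seg_costs, transition_cost):
    """Viterbi-style DP over template-segment placements.

    seg_costs: list of 1-D arrays; seg_costs[i][p] is the cost of placing
        segment i at candidate position p (e.g. a chamfer distance to the
        nearest edge contour).
    transition_cost(p, q): deformation penalty for placing consecutive
        segments at positions p and q.
    Returns (minimum total cost, optimal position per segment).
    """
    n = len(seg_costs)
    D = [np.asarray(c, dtype=float).copy() for c in seg_costs]
    back = [np.zeros(len(c), dtype=int) for c in D]
    for i in range(1, n):
        for q in range(len(D[i])):
            trans = np.array([transition_cost(p, q)
                              for p in range(len(D[i - 1]))])
            j = int(np.argmin(D[i - 1] + trans))
            back[i][q] = j
            D[i][q] += D[i - 1][j] + trans[j]
    q = int(np.argmin(D[-1]))        # best placement of the last segment
    path = [q]
    for i in range(n - 1, 0, -1):    # backtrack to the first segment
        q = back[i][q]
        path.append(q)
    return float(np.min(D[-1])), path[::-1]
```

Letting each segment shift independently, subject only to the transition penalty, is what gives the template its tolerance to variations in shape and size.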

Place, publisher, year, edition, pages
IEEE, 2016
Keywords
detection, dynamic programming, saliency, scene analysis, tracking
National Category
Computer Vision and Robotics (Autonomous Systems); Computer and Information Sciences
Identifiers
urn:nbn:se:umu:diva-118583 (URN), 10.1109/CSPA.2016.7515772 (DOI), 000389632900001 (ISI), 2-s2.0-84983494524 (Scopus ID), 978-1-4673-8780-4 (ISBN)
Conference
12th IEEE Colloquium on Signal Processing and its Applications (CSPA 2016), Malacca, Malaysia, March 4-6, 2016
Available from: 2016-03-23. Created: 2016-03-23. Last updated: 2023-03-24. Bibliographically approved
Halawani, A., ur Réhman, S. & Li, H. (2015). Active Vision for Tremor Disease Monitoring. In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences AHFE 2015. Paper presented at the 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015), July 26-30, 2015, Las Vegas, NV (pp. 2042-2048). Vol. 3.
Active Vision for Tremor Disease Monitoring
2015 (English) In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences AHFE 2015, 2015, Vol. 3, p. 2042-2048. Conference paper, Published paper (Refereed)
Abstract [en]

The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been used for this purpose before, the system we introduce differs intrinsically from traditional systems: the camera is placed on the user's body rather than in front of it, thus reversing the whole process of motion estimation. We call this active motion tracking. Active vision is simpler in setup and achieves more accurate results compared to the traditional arrangements, which we refer to as "passive" here. One main advantage of active tracking is its ability to detect even tiny motions with this simple setup, which makes it very suitable for monitoring tremor disorders.
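
As an illustration of the reversed ("active") setup, the ego-motion of a body-mounted camera can be estimated by matching features of the static scene between consecutive frames. The sketch below is a hypothetical stand-in (the paper's keywords mention SIFT; ORB is substituted here, and all names are assumptions):

```python
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_shift(prev_gray, curr_gray):
    """Estimate the (dx, dy) image shift of a body-mounted camera between
    two frames; the shift trace over time reflects the tremor motion."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Robust 2-D similarity fit; its translation part is the frame shift.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return (float(M[0, 2]), float(M[1, 2])) if M is not None else None
```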

Series
Procedia Manufacturing, ISSN 2351-9789
Keywords
Active vision, Tremors, SIFT, Motion estimation, Motion tracking
National Category
Computer Vision and Robotics (Autonomous Systems); Computer Sciences
Identifiers
urn:nbn:se:umu:diva-109206 (URN), 10.1016/j.promfg.2015.07.252 (DOI), 000383740302022 (ISI), 2-s2.0-85009949429 (Scopus ID)
Conference
6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015), July 26-30, 2015, Las Vegas, NV
Available from: 2015-09-22. Created: 2015-09-22. Last updated: 2023-03-24. Bibliographically approved
Abedan Kondori, F., Yousefi, S., Kouma, J.-P., Liu, L. & Li, H. (2015). Direct hand pose estimation for immersive gestural interaction. Pattern Recognition Letters, 66, 91-99
Direct hand pose estimation for immersive gestural interaction
2015 (English) In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 66, p. 91-99. Article in journal (Refereed). Published
Abstract [en]

This paper presents a novel approach for performing intuitive gesture-based interaction using depth data acquired by Kinect. The main challenge in enabling immersive gestural interaction is dynamic gesture recognition, which can be formulated as a combination of two tasks: gesture recognition and gesture pose estimation. Incorporating a fast and robust pose estimation method would lessen this burden to a great extent. In this paper we propose a direct method for real-time hand pose estimation. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to estimate 3D hand motion directly, without the need to impose other constraints. Extensive experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system's performance in 3D object manipulation on two different setups: a desktop computer and a mobile platform. This reveals the system's capability to accommodate different interaction procedures. In addition, a user study is conducted to evaluate learnability, user experience and interaction quality of 3D gestural interaction in comparison to 2D touchscreen interaction.
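
The constraint in question replaces image brightness with depth. A plausible form, analogous to the classical range flow constraint from the range-image literature (the paper's exact derivation may differ), is:

```latex
% Brightness constancy gives the classical optical flow constraint
%   I_x u + I_y v + I_t = 0.
% Treating the depth map Z(x, y, t) from the range sensor as the
% conserved quantity instead yields
\[
  Z_x U + Z_y V - W + Z_t = 0,
\]
% where (U, V, W) is the 3D velocity of the observed surface point and
% Z_x, Z_y, Z_t are the partial derivatives of the depth map. Each pixel
% thus constrains the 3D hand motion directly, with no extra constraints.
```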

Keywords
Immersive gestural interaction, Dynamic gesture recognition, Hand pose estimation
National Category
Signal Processing
Identifiers
urn:nbn:se:umu:diva-86748 (URN), 10.1016/j.patrec.2015.03.013 (DOI), 000362271100011 (ISI), 2-s2.0-84943197653 (Scopus ID)
Available from: 2014-03-06. Created: 2014-03-06. Last updated: 2023-03-23. Bibliographically approved
Halawani, A. & Li, H. (2015). Human Ear Localization: A Template-based Approach. Paper presented at ICOPR 2015, International Workshop on Pattern Recognition, Dubai, May 4-5, 2015.
Human Ear Localization: A Template-based Approach
2015 (English). Conference paper, Oral presentation only (Other academic)
Abstract [en]

We propose a simple yet effective technique for shape-based ear localization. The idea is based on using a predefined binary ear template that is matched to ear contours in a given edge image. To cope with changes in ear shape and size, the template is allowed to deform; deformation is achieved by dividing the template into segments. A dynamic programming search algorithm, as in the template-based scene analysis work above, is used to accomplish the matching process, achieving very robust localization results in various cluttered and noisy setups.

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-109204 (URN)
Conference
ICOPR 2015, International Workshop on Pattern Recognition, Dubai, May 4-5, 2015
Available from: 2015-09-22. Created: 2015-09-22. Last updated: 2019-06-18. Bibliographically approved
Lv, Z., Halawani, A., Feng, S., ur Réhman, S. & Li, H. (2015). Touch-less interactive augmented reality game on vision-based wearable device. Personal and Ubiquitous Computing, 19(3-4), 551-567
Touch-less interactive augmented reality game on vision-based wearable device
2015 (English) In: Personal and Ubiquitous Computing, ISSN 1617-4909, E-ISSN 1617-4917, Vol. 19, no. 3-4, p. 551-567. Article in journal (Refereed). Published
Abstract [en]

There is an increasing interest in creating pervasive games based on emerging interaction technologies. To enable touch-less, interactive, augmented reality games on vision-based wearable devices, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games using dynamic hand/foot gestures in front of the camera, which trigger interaction events that manipulate the virtual objects in the scene. As a proof of concept, three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology. Finally, a comparative evaluation of the touch-less approach, running on a hybrid wearable framework and on Google Glass, is presented to demonstrate its social acceptability and usability, together with workload assessment and measures of user emotion and satisfaction.

Keywords
Wearable device, Smartphone game, Hands-free, Pervasive game, Augmented reality game, Touch-less
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-109203 (URN), 10.1007/s00779-015-0844-1 (DOI), 000357471100006 (ISI), 2-s2.0-84943362188 (Scopus ID)
Available from: 2015-09-22. Created: 2015-09-22. Last updated: 2023-03-23. Bibliographically approved
Yousefi, S., Li, H. & Liu, L. (2014). 3D Gesture Analysis Using a Large-Scale Gesture Database. In: Bebis, G; Boyle, R; Parvin, B; Koracin, D; McMahan, R; Jerald, J; Zhang, H; Drucker, SM; Kambhamettu, C; ElChoubassi, M; Deng, Z; Carlson, M (Ed.), Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I. Paper presented at 10th International Symposium on Visual Computing (ISVC), DEC 08-10, 2014, Las Vegas, NV (pp. 206-217).
3D Gesture Analysis Using a Large-Scale Gesture Database
2014 (English) In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; McMahan, R; Jerald, J; Zhang, H; Drucker, SM; Kambhamettu, C; ElChoubassi, M; Deng, Z; Carlson, M, 2014, p. 206-217. Conference paper, Published paper (Refereed)
Abstract [en]

3D gesture analysis is a highly desired feature of future interaction design. Specifically, in augmented environments, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities. This paper introduces a novel solution for real-time 3D gesture analysis using an extremely large gesture database. The database includes images of various articulated hand gestures annotated with the 3D position/orientation parameters of the hand joints. Our search algorithm hierarchically scores low-level edge-orientation features between the query input and the database entries and retrieves the best match. Once the best match is found in real time, its pre-calculated 3D parameters can instantly be used for gesture-based interaction.
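
The retrieval step can be pictured as a nearest-neighbour search over edge-orientation features. Below is a simplified, hypothetical sketch in which coarse gradient-orientation histograms over a grid stand in for the paper's hierarchical low-level features:

```python
import numpy as np
import cv2

def edge_orientation_feature(gray, grid=4, bins=8):
    """L2-normalized, magnitude-weighted gradient-orientation histograms
    over a grid of cells; a crude proxy for edge-orientation features."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)  # ang in [0, 2*pi)
    h, w = gray.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[cell], bins=bins,
                                   range=(0, 2 * np.pi),
                                   weights=mag[cell])
            feats.append(hist)
    f = np.concatenate(feats).astype(float)
    return f / (np.linalg.norm(f) + 1e-9)

def best_match(query_feat, db_feats):
    """Return the index of the best-scoring database entry (db_feats is an
    N x D matrix); its pre-annotated 3D joint parameters are then used."""
    return int(np.argmax(db_feats @ query_feat))
```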

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8887
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-106166 (URN), 10.1007/978-3-319-14249-4_20 (DOI), 000354694000020 (ISI), 2-s2.0-84916602858 (Scopus ID), 978-3-319-14249-4 (ISBN), 978-3-319-14248-7 (ISBN)
Conference
10th International Symposium on Visual Computing (ISVC), DEC 08-10, 2014, Las Vegas, NV
Available from: 2015-07-09. Created: 2015-07-09. Last updated: 2023-03-24. Bibliographically approved
Abedan Kondori, F., Yousefi, S., Ostovar, A., Liu, L. & Li, H. (2014). A Direct Method for 3D Hand Pose Recovery. In: 22nd International Conference on Pattern Recognition. Paper presented at the 22nd International Conference on Pattern Recognition (ICPR), 24-28 August 2014, Stockholm, Sweden (pp. 345-350).
A Direct Method for 3D Hand Pose Recovery
2014 (English) In: 22nd International Conference on Pattern Recognition, 2014, p. 345-350. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a novel approach for performing intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. We formulate the gesture-based interaction system as a combination of two separate problems, gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters: based on the range images, a new version of the optical flow constraint equation is derived (see the range flow sketch under the Pattern Recognition Letters record above), which can be utilized to estimate 3D hand motion directly, without the need to impose other constraints. Our experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system's performance in 3D object manipulation; this application is intended to explore the system's capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate the learnability, user experience and interaction quality of 3D interaction in comparison to 2D touch-screen interaction.

Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:umu:diva-108475 (URN), 10.1109/ICPR.2014.68 (DOI), 000359818000057 (ISI), 2-s2.0-84919919226 (Scopus ID), 978-1-4799-5208-3 (ISBN)
Conference
22nd International Conference on Pattern Recognition (ICPR), 24-28 August 2014, Stockholm, Sweden
Available from: 2015-09-14. Created: 2015-09-11. Last updated: 2023-03-23. Bibliographically approved