Liu, Li
Publications (10 of 27)
Chen, M., Xia, D., Min, C., Zhao, X., Chen, Y., Liu, L. & Li, X. (2016). Neonatal repetitive pain in rats leads to impaired spatial learning and dysregulated hypothalamic-pituitary-adrenal axis function in later life. Scientific Reports, 6, Article ID 39159.
2016 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 6, article id 39159. Article in journal (Refereed). Published
Abstract [en]

Preterm birth is a major health issue. As part of their life-saving care, most preterm infants require hospitalization and are inevitably exposed to repetitive skin-breaking procedures. The long-term effects of neonatal repetitive pain on cognitive and emotional behaviors involving hypothalamic-pituitary-adrenal (HPA) axis function in young and adult rats are unknown. From P8 to P85, mechanical hypersensitivity of the bilateral hindpaws was observed in the Needle group (P < 0.001). Compared with the Tactile group, the Needle group took longer to find the platform on P30 than on P29 (P = 0.03), with a decreased number of original platform site crossings during the probe trial of the Morris water maze test (P = 0.026). Moreover, the Needle group spent more time and traveled longer distances in the central area than the Tactile group in the Open-field test, both in prepubertal and adult rats (P < 0.05). HPA axis function in the Needle group differed from that in the Tactile group (P < 0.05), with decreased stress responsiveness in prepuberty and puberty (P < 0.05) and increased stress responsiveness in adulthood (P < 0.05). This study indicates that repetitive pain occurring during a critical developmental period may have severe consequences, with behavioral and neuroendocrine disturbances persisting from prepuberty into adult life.

Place, publisher, year, edition, pages
Nature Publishing Group, 2016
National Category
Physiology Neurosciences
Identifiers
urn:nbn:se:umu:diva-129707 (URN) 10.1038/srep39159 (DOI) 000389814700001 ()
Available from: 2017-01-10. Created: 2017-01-09. Last updated: 2018-06-09. Bibliographically approved
Abedan Kondori, F., Yousefi, S., Kouma, J.-P., Liu, L. & Li, H. (2015). Direct hand pose estimation for immersive gestural interaction. Pattern Recognition Letters, 66, 91-99
2015 (English). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 66, p. 91-99. Article in journal (Refereed). Published
Abstract [en]

This paper presents a novel approach for intuitive gesture-based interaction using depth data acquired by Kinect. The main challenge in enabling immersive gestural interaction is dynamic gesture recognition. This problem can be formulated as a combination of two tasks: gesture recognition and gesture pose estimation. Incorporating a fast and robust pose estimation method lessens the burden considerably. In this paper we propose a direct method for real-time hand pose estimation. Based on the range images, a new version of the optical flow constraint equation is derived, which can be used to directly estimate 3D hand motion without imposing additional constraints. Extensive experiments show that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system's performance in 3D object manipulation on two different setups: a desktop computer and a mobile platform. This demonstrates the system's capability to accommodate different interaction procedures. In addition, a user study is conducted to evaluate learnability, user experience, and interaction quality of 3D gestural interaction in comparison to 2D touchscreen interaction.
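The derivation described in the abstract builds on what the range-flow literature calls the range flow motion constraint; as a hedged illustration (the paper's exact variant may differ), the standard form for a depth map Z(x, y, t) and a 3D surface velocity (U, V, W) is:

```latex
% Standard range flow motion constraint (cf. Spies, Jähne & Barron):
% differentiating Z(x(t), y(t), t) along the motion and using dZ/dt = W gives
Z_x U + Z_y V - W + Z_t = 0
```

Analogous to the brightness constancy equation of classical optical flow, each depth pixel contributes one linear equation in (U, V, W), so rigid motion parameters can be estimated directly by least squares over a region without extra constraints.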

Keywords
Immersive gestural interaction, Dynamic gesture recognition, Hand pose estimation
National Category
Signal Processing
Identifiers
urn:nbn:se:umu:diva-86748 (URN) 10.1016/j.patrec.2015.03.013 (DOI) 000362271100011 ()
Available from: 2014-03-06. Created: 2014-03-06. Last updated: 2018-06-08. Bibliographically approved
Yousefi, S., Li, H. & Liu, L. (2014). 3D Gesture Analysis Using a Large-Scale Gesture Database. In: Bebis, G; Boyle, R; Parvin, B; Koracin, D; McMahan, R; Jerald, J; Zhang, H; Drucker, SM; Kambhamettu, C; ElChoubassi, M; Deng, Z; Carlson, M (Ed.), Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I. Paper presented at the 10th International Symposium on Visual Computing (ISVC), December 8-10, 2014, Las Vegas, NV (pp. 206-217).
2014 (English). In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; McMahan, R; Jerald, J; Zhang, H; Drucker, SM; Kambhamettu, C; ElChoubassi, M; Deng, Z; Carlson, M, 2014, p. 206-217. Conference paper, Published paper (Refereed)
Abstract [en]

3D gesture analysis is a highly desired feature of future interaction design. In augmented environments in particular, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction may be the most effective alternative to current input facilities. This paper introduces a novel solution for real-time 3D gesture analysis using an extremely large gesture database. The database contains images of various articulated hand gestures annotated with the 3D position/orientation parameters of the hand joints. Our search algorithm is based on hierarchical scoring of low-level edge-orientation features between the query input and the database entries, retrieving the best match. Once the best match is found in real time, its pre-calculated 3D parameters can be used immediately for gesture-based interaction.
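A minimal sketch of this style of database retrieval, using a single global edge-orientation histogram and histogram intersection as the similarity score. This is an illustrative assumption, not the paper's method: the actual system uses hierarchical scoring of low-level edge-orientation features, and the function names and descriptor here are hypothetical.

```python
import math

def edge_orientation_histogram(edge_angles, bins=8):
    # Coarse global descriptor: normalized histogram of edge orientations in [0, pi)
    hist = [0.0] * bins
    for a in edge_angles:
        idx = min(int((a % math.pi) / math.pi * bins), bins - 1)
        hist[idx] += 1.0
    total = sum(hist)
    return [h / total for h in hist] if total > 0 else hist

def retrieve_best_match(query_angles, database):
    # database: list of (edge_angles, pose_params) pairs, where pose_params are
    # the 3D joint parameters precomputed offline for each database image
    query = edge_orientation_histogram(query_angles)
    best_pose, best_score = None, -1.0
    for angles, pose in database:
        hist = edge_orientation_histogram(angles)
        # Histogram intersection: higher score = more similar orientation profile
        score = sum(min(q, h) for q, h in zip(query, hist))
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose, best_score
```

Because the 3D parameters are attached to each database entry offline, retrieval replaces per-frame pose optimization with a lookup, which is what makes real-time operation feasible.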

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8887
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-106166 (URN) 10.1007/978-3-319-14249-4_20 (DOI) 000354694000020 () 978-3-319-14249-4 (ISBN) 978-3-319-14248-7 (ISBN)
Conference
10th International Symposium on Visual Computing (ISVC), December 8-10, 2014, Las Vegas, NV
Available from: 2015-07-09. Created: 2015-07-09. Last updated: 2018-06-07. Bibliographically approved
Abedan Kondori, F., Yousefi, S., Ostovar, A., Liu, L. & Li, H. (2014). A Direct Method for 3D Hand Pose Recovery. In: 22nd International Conference on Pattern Recognition. Paper presented at the 22nd International Conference on Pattern Recognition (ICPR), 24–28 August 2014, Stockholm, Sweden (pp. 345-350).
2014 (English). In: 22nd International Conference on Pattern Recognition, 2014, p. 345-350. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents a novel approach for intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. We formulate the gesture-based interaction system as a combination of two separate problems: gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters. Based on the range images, a new version of the optical flow constraint equation is derived, which can be used to directly estimate 3D hand motion without imposing additional constraints. Our experiments show that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system's performance in 3D object manipulation; this application is intended to explore the system's capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate learnability, user experience, and interaction quality of 3D interaction in comparison to 2D touch-screen interaction.

Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:umu:diva-108475 (URN) 10.1109/ICPR.2014.68 (DOI) 000359818000057 () 978-1-4799-5208-3 (ISBN)
Conference
22nd International Conference on Pattern Recognition (ICPR), 24–28 August 2014, Stockholm, Sweden
Available from: 2015-09-14. Created: 2015-09-11. Last updated: 2019-11-11. Bibliographically approved
Abedan Kondori, F., Yousefi, S., Liu, L. & Li, H. (2014). Head operated electric wheelchair. In: IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI 2014). Paper presented at the IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) (pp. 53-56). IEEE
2014 (English). In: IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI 2014), IEEE, 2014, p. 53-56. Conference paper, Published paper (Refereed)
Abstract [en]

Currently, the most common way to control an electric wheelchair is with a joystick. However, some individuals, such as quadriplegic patients, are unable to operate joystick-driven electric wheelchairs due to severe physical disabilities. This paper proposes a novel head pose estimation method to assist such patients: head motion parameters are employed to control and drive an electric wheelchair. We introduce a direct method for estimating user head motion based on a sequence of range images captured by Kinect. In this work, we derive a new version of the optical flow constraint equation for range images and show how it can be used to estimate head motion directly. Experimental results reveal that the proposed system works with high accuracy in real time. We also show simulation results for navigating the electric wheelchair by recovering user head motion.
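Once head motion is recovered, it still has to be mapped to drive commands. A minimal sketch of one plausible mapping, with a dead zone to suppress small involuntary head motions: the function name, angle limits, and sign convention (pitch down = positive = drive forward) are all illustrative assumptions, not taken from the paper.

```python
def head_to_drive(yaw_deg, pitch_deg, dead_zone=5.0, limit=30.0):
    """Map estimated head pose angles (degrees) to normalized drive commands.

    Returns (forward, turn), each in [-1, 1]. Angles inside the dead zone
    produce no motion; beyond it, output scales linearly and saturates
    at `limit` degrees.
    """
    def scaled(angle):
        if abs(angle) <= dead_zone:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        return sign * min((abs(angle) - dead_zone) / (limit - dead_zone), 1.0)

    # Assumed convention: pitch controls forward speed, yaw controls turning
    return scaled(pitch_deg), scaled(yaw_deg)
```

The dead zone is the design-critical piece for this application: without it, estimation noise and posture drift would translate directly into unintended wheelchair motion.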

Place, publisher, year, edition, pages
IEEE, 2014
Series
IEEE Southwest Symposium on Image Analysis and Interpretation, ISSN 1550-5782
National Category
Signal Processing Computer Sciences
Identifiers
urn:nbn:se:umu:diva-86746 (URN) 000355255900014 () 978-1-4799-4053-0 (ISBN)
Conference
IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)
Available from: 2014-03-06. Created: 2014-03-06. Last updated: 2018-06-08. Bibliographically approved
Kondori, F. A., Liu, L. & Li, H. (2014). Telelife: An immersive media experience for rehabilitation. In: Proceedings of the 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014). Paper presented at the 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), 9-12 December 2014, Chiang Mai, Thailand. IEEE
2014 (English). In: Proceedings of the 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, the emergence of telerehabilitation systems for home-based therapy has altered healthcare. Telerehabilitation enables therapists to observe patients' status via the Internet, so a patient does not have to visit a rehabilitation facility for every session. Despite the great opportunities telerehabilitation provides, two major issues affect its effectiveness: the isolation of the patient at home, and the loss of the therapist's direct supervision. Because patients have no actual interaction with other people during the rehabilitation period, they become isolated and gradually lose their social skills. Moreover, without direct supervision by therapists, rehabilitation exercises can be performed with poor compensation strategies that lead to poor-quality recovery. To resolve these issues, we propose telelife, a new concept for future rehabilitation systems. The idea is to use media technology to create an entirely new immersive media experience for rehabilitation. In telerehabilitation, patients execute exercises locally and therapists monitor their status remotely; in telelife, by contrast, patients perform exercises remotely and therapists monitor locally. Thus, telelife not only enables rehabilitation at a distance but also improves patients' social competence and provides direct therapist supervision. In this paper we introduce telelife to enhance telerehabilitation, and we investigate the technical challenges and possible methods for achieving it.

Place, publisher, year, edition, pages
IEEE, 2014
National Category
Signal Processing
Identifiers
urn:nbn:se:umu:diva-88093 (URN) 10.1109/APSIPA.2014.7041675 (DOI) 000392861900163 () 978-6-1636-1823-8 (ISBN)
Conference
2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), 9-12 December 2014, Chiang Mai, Thailand
Available from: 2014-04-23. Created: 2014-04-23. Last updated: 2018-06-08. Bibliographically approved
ur Réhman, S., Khan, M. S., Liu, L. & Li, H. (2014). Vibrotactile TV for immersive experience. In: Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific. Paper presented at the Annual Summit and Conference of the Asia-Pacific Signal and Information Processing Association (APSIPA), December 9-12, 2014, Angkor, Cambodia.
2014 (English). In: Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, 2014. Conference paper, Published paper (Refereed)
Abstract [en]

Audio and video are two powerful media forms for shortening the distance between the audience and the actors or players in TV and film. Recent research shows that people increasingly consume multimedia content on mobile devices such as tablets and smartphones. An important question therefore emerges: how can we render high-quality, personal immersive experiences to consumers on these systems? To give the audience an immersive engagement that differs from 'watching a play', we designed a study that renders complete immersive media, including 'emotional information', through augmented vibrotactile coding on the back of the user alongside the audio-video signal. The emotional responses reported for videos viewed with and without haptic enhancement show that participants exhibited an increased emotional response to media with haptic enhancement. Overall, these results suggest that our multisensory approach is effective and increases immersion and user satisfaction.

Keywords
tactile interface, multimodal interaction, interactive TV, Audio-Video processing
National Category
Signal Processing Interaction Technologies Computer Systems Telecommunications
Research subject
Computerized Image Analysis; human-computer interaction; Signal Processing
Identifiers
urn:nbn:se:umu:diva-101711 (URN) 10.1109/APSIPA.2014.7041631 (DOI) 000392861900119 () 978-6-1636-1823-8 (ISBN)
Conference
Annual Summit and Conference of the Asia-Pacific Signal and Information Processing Association (APSIPA), December 9-12, 2014, Angkor, Cambodia
Available from: 2015-04-09. Created: 2015-04-09. Last updated: 2018-06-07. Bibliographically approved
Abedan Kondori, F., Yousefi, S. & Liu, L. (2013). Active human gesture capture for diagnosing and treating movement disorders. Paper presented at the Swedish Symposium on Image Analysis (SSBA 2013), Gothenburg, Sweden.
2013 (English). Conference paper, Published paper (Other academic)
Abstract [en]

Movement disorders prevent many people from enjoying their daily lives. As with other diseases, diagnosis and analysis are key issues in treating such disorders. Computer vision-based motion capture systems are helpful tools for accomplishing this task. However, classical motion tracking systems suffer from several limitations: they are not cost effective, they cannot detect minute motions accurately, and they are spatially limited to the lab environment where the system is installed. In this project, we propose an innovative solution to these issues. By mounting the camera on the human body, we build a convenient, low-cost motion capture system that can be used by the patient in daily-life activities. We refer to this system as active motion capture, which is not confined to the lab environment. Real-time experiments in our lab demonstrated the robustness and accuracy of the system.

National Category
Other Medical Engineering
Identifiers
urn:nbn:se:umu:diva-83352 (URN)
Conference
The Swedish Symposium on Image Analysis (SSBA 2013), Gothenburg, Sweden
Available from: 2013-11-22. Created: 2013-11-22. Last updated: 2018-06-08. Bibliographically approved
Abedan Kondori, F. & Liu, L. (2012). 3D Active Human Motion Estimation for Biomedical Applications. In: Mian Long (Ed.), World Congress on Medical Physics and Biomedical Engineering May 26-31, 2012, Beijing, China. Paper presented at the World Congress on Medical Physics and Biomedical Engineering (WC 2012), Beijing, China, 26-31 May 2012 (pp. 1014-1017). Springer Berlin/Heidelberg
2012 (English). In: World Congress on Medical Physics and Biomedical Engineering May 26-31, 2012, Beijing, China / [ed] Mian Long, Springer Berlin/Heidelberg, 2012, p. 1014-1017. Conference paper, Published paper (Refereed)
Abstract [en]

Movement disorders prevent many people from enjoying their daily lives. As with other diseases, diagnosis and analysis are key issues in treating such disorders. Computer vision-based motion capture systems are helpful tools for accomplishing this task. However, classical motion tracking systems suffer from several limitations: they are not cost effective, they cannot detect minute motions accurately, and they are spatially limited to the lab environment where the system is installed. In this project, we propose an innovative solution to these issues. By mounting the camera on the human body, we build a convenient, low-cost motion capture system that can be used by the patient while practicing daily-life activities. We refer to this system as active motion capture, which is not confined to the lab environment. Real-time experiments in our lab demonstrated the robustness and accuracy of the system.

Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2012. p. 4
Series
IFMBE Proceedings, ISSN 1680-0737 ; 39
Keywords
Active motion tracking, Human motion analysis, Movement disorder, SIFT
National Category
Biomedical Laboratory Science/Technology
Identifiers
urn:nbn:se:umu:diva-55831 (URN) 10.1007/978-3-642-29305-4_266 (DOI) 978-3-642-29304-7 (print) (ISBN) 978-3-642-29305-4 (ISBN)
Conference
World Congress on Medical Physics and Biomedical Engineering (WC 2012), Beijing, China, 26-31 May 2012
Available from: 2012-06-04. Created: 2012-06-04. Last updated: 2018-06-08. Bibliographically approved
Chen, M., Shi, X., Chen, Y., Cao, Z., Cheng, R., Xu, Y., . . . Li, X. (2012). A prospective study of pain experience in a neonatal intensive care unit of China. The Clinical Journal of Pain, 28(8), 700-704
2012 (English). In: The Clinical Journal of Pain, ISSN 0749-8047, E-ISSN 1536-5409, Vol. 28, no 8, p. 700-704. Article in journal (Refereed). Published
Abstract [en]

Objectives: To assess pain burden in neonates during their hospitalization in China and thus provide evidence for the necessity of neonatal pain management. Patients and Methods: The Neonatal Facial Coding System was used to evaluate pain in neonates. We prospectively collected data of all painful procedures performed on 108 neonates (term, 62; preterm, 46) recruited from admission to discharge in a neonatal intensive care unit of a university-affiliated hospital in China. Results: We found that during hospitalization each preterm and term neonate was exposed to a median of 100.0 (range, 11 to 544) and 56.5 (range, 12 to 249) painful procedures, respectively. Most of the painful procedures were performed within the first 3 days. Preterm neonates, especially those born at 28 and 29 weeks' gestational age, experienced more pain than those born at 30 weeks' gestation or later (P < 0.001). Among those painful procedures, tracheal aspiration was the most frequently performed on preterm neonates, and intravenous cannulation was the most common for term neonates. Moreover, tracheal intubations and femoral venous puncture were found to be the most painful. Notably, none of the painful procedures was accompanied by analgesia. Conclusions: Neonates, particularly preterm neonates, were exposed to numerous invasive painful procedures without appropriate analgesia in hospitals in China. The potential long-term impacts of poorly treated pain in neonates call for a change in pediatric practice in China and in countries with similar practices.

Place, publisher, year, edition, pages
Philadelphia, PA, USA: Lippincott Williams & Wilkins, 2012
Keywords
neonates, procedural pain, neonatal intensive care unit, analgesia
National Category
Anesthesiology and Intensive Care Neurology
Identifiers
urn:nbn:se:umu:diva-60504 (URN) 10.1097/AJP.0b013e3182400d54 (DOI) 000308672100008 ()
Available from: 2012-10-16. Created: 2012-10-15. Last updated: 2018-06-08. Bibliographically approved