umu.se Publications
101 - 128 of 128
  • 101.
    ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lipless tracking and emotion estimation (2007). In: Proceedings of IEEE 3rd International Conference on Signal Image Technology & Internet Based Systems, Shanghai, China: IEEE, 2007, p. 768-774. Conference paper (Refereed)
    Abstract [en]

    Automatic human lip tracking is a key component of many facial image analysis tasks, such as lip-reading and estimating emotion from the lips, and it has remained a hard image analysis problem for decades. In this paper, we propose an indirect lip tracking strategy: ‘lipless tracking’. It is based on the observation that many people do not have clearly delineated lips, and some have no visible lips at all. The strategy is to select and localize stable lip features around the mouth for tracking. For this purpose, deformable contour segments are modelled from lip features, and tracking is performed using dynamic programming and the Viterbi algorithm. The strength of the proposed algorithm is demonstrated in the emotion estimation domain. Finally, real-time video experiments on private and publicly available data sets (the MMI face database) demonstrate the robustness of the proposed lipless tracking technique.
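The dynamic programming / Viterbi matching this abstract refers to can be sketched with a standard minimum-cost Viterbi recursion. All costs and the tiny state space below are invented for illustration; they are not the paper's actual contour-segment model.

```python
import numpy as np

def viterbi_match(unary, pairwise):
    """Find the minimum-cost label sequence with dynamic programming.

    unary:    (T, K) array, cost of assigning label k at step t
    pairwise: (K, K) array, cost of moving from label i to label j
    Returns the optimal label path as a list of indices.
    """
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + pairwise + unary[t][None, :]
        back[t] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0)
    # Trace the optimal path backwards from the cheapest final state.
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 contour steps, 2 candidate feature positions per step.
unary = np.array([[0.0, 5.0], [5.0, 0.0], [0.0, 5.0]])
pairwise = np.array([[0.0, 1.0], [1.0, 0.0]])
best = viterbi_match(unary, pairwise)
```

With these toy costs the smoothness term wins in the middle step only when the data cost is low enough, so the recovered path is `[0, 1, 0]`.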

  • 102.
    Ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Manifold of Facial Expression for Tactile Perception (2007). In: IEEE International Workshop on Multimedia Signal Processing (MMSP07), Greece, 2007, p. 239-242. Conference paper (Refereed)
  • 103.
    ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tactile car warning system (2005). In: Proceedings of the First Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Pisa, Italy: IEEE, 2005. Conference paper (Other academic)
    Abstract [en]

    Driving on a busy road is a demanding task: drivers must combine all their senses to handle upcoming events and situations. According to a recent survey, pedestrian accidents account for a large share of traffic accidents in the EU; more than 200,000 pedestrians are injured and about 9,000 killed every year. A great deal of research has addressed pedestrian detection from a moving platform using image processing techniques such as shape- and texture-based methods. Guilloux and colleagues have pointed out the advantages of using infrared cameras, and several pedestrian detection systems using infrared video sequences have also been tested. In parallel, researchers have explored using the human hand as a tactile sensory input channel in order to obtain the precise knowledge needed to build tactual displays, and a number of vibrotactile devices have recently become available for experimental and commercial use. We present a driver assistance system that gives a tactual alert when a pedestrian is detected. Different issues in developing such a system are considered; pedestrian detection in infrared video is performed by template matching. Finally, a ‘driver assistant system’ experiment is presented.

  • 104.
    ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sun, Jiong
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Improving User Communication and Entertainment Experience by Vibration (2006). In: Swedish Symposium on Image Analysis, 2006, p. 81-85. Conference paper (Other academic)
  • 105.
    ur Réhman, Shafiq
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sun, Jiong
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Turn your mobile into the football: rendering live football game by vibration (2008). In: IEEE Transactions on Multimedia, ISSN 1520-9210, E-ISSN 1941-0077, Vol. 10, no 6, p. 1022-1033. Article in journal (Refereed)
    Abstract [en]

    Vibration offers many potential benefits for mobile phones. In this paper, we propose a new method of rendering a live football game on mobile phones using vibration. A mobile phone is “synchronized” with the ball on the real field: by holding the phone, users can experience the dynamic movements of the ball and know the attacking direction and which team is leading the attack. A usability test of our system shows that a vibrotactile display is suitable for rendering live football information on mobile phones, given well-designed coding schemes and the right training process.

  • 106. Wu, Jinsong
    et al.
    Bisio, Igor
    Gniady, Chris
    Hossain, Ekram
    Valla, Massimo
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Context-aware networking and communications: part 1 (2014). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 52, no 6, p. 14-15. Article in journal (Other academic)
  • 107. Wu, Jinsong
    et al.
    Bisio, Igor
    Gniady, Chris
    Hossain, Ekram
    Valla, Massimo
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Umeå Centre for Interaction Technology (UCIT). KTH, Stockholm, Sweden.
    Context-aware networking and communications: part 2 (2014). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 52, no 8, p. 64-65. Article in journal (Other academic)
  • 108.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    A Networked Yawn Detector (2003). Report (Other academic)
    Abstract [en]

    A driver’s mental state can be estimated from visual cues: typical fatigue events can be detected or predicted from dynamic facial expression events such as yawning. This paper demonstrates a networked surveillance system in which the driver’s facial expression parameters are extracted from real-time video of the face in the car and sent over a wireless network to a surveillance centre, where the parameters are evaluated to determine whether the driver is fatigued. Parameter extraction uses the model-based coding (MBC) technique, and a Hidden Markov Model (HMM) recognizes the yawn event that characterizes a typical fatigue episode.

    A prototype of such a networked system was set up and subjected to user tests. Promising results from user tests and their subjective evaluations are reported.
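The HMM recognition step described in this abstract amounts to scoring an observation sequence under a trained model. The sketch below runs the standard forward algorithm on a toy two-state model; every probability, and the "mouth closed / mouth open" observation coding, is invented for illustration and is not taken from the paper.

```python
import numpy as np

# Toy two-state HMM (state 0 = neutral, state 1 = yawning).
start = np.array([0.9, 0.1])            # initial state probabilities
trans = np.array([[0.8, 0.2],           # state transition matrix
                  [0.3, 0.7]])
emit = np.array([[0.9, 0.1],            # P(observation | state):
                 [0.2, 0.8]])           # obs 0 = mouth closed, 1 = wide open

def sequence_likelihood(obs):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return float(alpha.sum())

yawn_like = sequence_likelihood([1, 1, 1, 1])     # sustained open mouth
neutral_like = sequence_likelihood([0, 0, 0, 0])  # mouth stays closed
```

In a real detector one would compare the likelihoods of competing models (yawn vs. no-yawn) rather than a single score; here both sequences simply get valid probabilities under the one toy model.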

  • 109.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    A video based real time fatigue detection system (2004). Report (Other academic)
    Abstract [en]

    Fatigue can be detected or predicted from a dynamic facial expression event: the yawn. Facial expression parameters are extracted from real-time video of faces and used as observations for an underlying Hidden Markov Model that recognizes the yawn event. Promising results from tests with many users, together with their evaluations, are reported.

  • 110.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Haar-like feature based face tracking with dynamic programming (2004). In: Proceedings, Symposium on Image Analysis, 2004, p. 186-189. Conference paper (Refereed)
  • 111.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Initialization of Model Based Coding for Customizing Networked Conference Video (2003). Report (Other academic)
    Abstract [en]

    This paper considers the problem of video content adaptation for a mobile user in a networked conference application. Because network bandwidth varies widely, it is challenging to keep video content accessible to the mobile user. The paper suggests using a UMA engine to customize video according to the mobile user’s network environment: the normal video codec is used while network transmission is smooth, and a very low bit rate model-based coding (MBC) codec is switched in when bandwidth is scarce. As a crucial step of this customization, the initialization of the MBC is studied in detail: a generic face model must be fitted onto the face of the talking head appearing in the first video frame. The paper discusses what constitutes a proper initialization scheme for the UMA engine and suggests a strategy for solving the initialization problem using simulated annealing (SA) within an analysis-by-synthesis (ABS) framework. Promising performance results from the suggested approach are reported.

  • 112.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Initialization of Model Based Coding in Adaptive Video Transmission: An Analysis-By-Synthesis approach (2003). Report (Other academic)
    Abstract [en]

    This paper considers the problem of video content adaptation for teleconferencing in a heterogeneous network communication system. Because of the diversity of content providers and content consumers in the network, adaptive content transport is preferred for both efficiency and reliability. Video content adaptation can be achieved by using a normal video codec while network transmission is smooth and switching to a very low bit rate model-based coding (MBC) codec when network congestion occurs. An inevitable step of this switching is the initialization of the MBC, that is, fitting a generic face model onto the first video frame. This paper revisits the analysis-by-synthesis (ABS) approach used in many vision-based tracking problems and reports our strategy for implementing the initialization within it. A perturbation-based global optimization algorithm, simulated annealing (SA), is examined within the ABS framework; problems are identified, different remedies are studied, and promising performance results of the suggested approach are reported.
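The simulated annealing search that both initialization reports rely on can be sketched in one dimension: perturb the current solution, always accept improvements, and accept worse moves with a probability that shrinks as the temperature cools. The cost function, schedule, and step size below are invented stand-ins for the model-fitting residual the papers actually optimize.

```python
import math
import random

def simulated_annealing(cost, x0, steps=2000, temp0=1.0, seed=1):
    """Minimise `cost` by random perturbation under a linear cooling schedule."""
    rng = random.Random(seed)
    x, best = x0, x0
    for k in range(steps):
        temp = temp0 * (1.0 - k / steps) + 1e-9
        cand = x + rng.gauss(0.0, 0.5)
        delta = cost(cand) - cost(x)
        # Accept improvements always; accept uphill moves with a
        # probability exp(-delta/temp) that vanishes as temp -> 0.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if cost(x) < cost(best):
            best = x
    return best

# Fit a single 'model parameter' x to minimise (x - 3)^2 + 1,
# starting far from the optimum at x = -5.
best = simulated_annealing(lambda x: (x - 3.0) ** 2 + 1.0, x0=-5.0)
```

The early high-temperature phase lets the search escape poor starting regions, which is the property that makes SA attractive for the global fitting problem the papers describe.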

  • 113.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Is A Magnetic Sensor Capable of Evaluating A Vision-Based Face Tracking System? (2003). Report (Other academic)
    Abstract [en]

    This paper addresses an important question: how should a vision-based face tracking system be evaluated? Although it is now popular to employ a magnetic sensor to evaluate the performance of such systems, the conditions and limitations of its use are often ignored. In this paper we study this accepted evaluation methodology together with another evaluation measure, Peak Signal-to-Noise Ratio (PSNR), commonly used in the image coding community. The conditions for proper use of a magnetic sensor as an evaluation system are discussed. Our theoretical analysis and experiments with real video sequences show that the so-called “ground truth” must be selected very carefully. We believe that a valid performance evaluation is necessary to further the development of face tracking techniques, and that the evaluating system and the tracking system have to be considered jointly to decide whether an evaluation method is valid. The experimental results also give further hints about tracking performance under different tracking schemes.
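The PSNR measure mentioned in this abstract has a standard definition: 10·log10(peak²/MSE) in decibels. A minimal implementation (the toy images are invented):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-size images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100.0)
degraded = ref + 10.0          # uniform error of 10 grey levels -> MSE = 100
value = psnr(ref, degraded)    # 10 * log10(255^2 / 100) ≈ 28.13 dB
```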

  • 114.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Model based video coding using open-loop architecture (2004). Report (Other academic)
    Abstract [en]

    This paper addresses an important issue in model-based coding, namely, how to extract the motion of a head object from a video sequence. Traditional methods use a closed-loop coding architecture; in this paper we use an open-loop architecture instead. Active tracking is chosen for motion estimation, and two cameras are used to estimate global and local motion independently. Our theoretical analysis and experiments show that this is a cost-effective way to build a model-based coding system.

  • 115.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Sample Based Texture Extraction for Model Based Coding (2004). Report (Other academic)
    Abstract [en]

    This paper addresses an important issue in model-based coding, namely, how to extract the facial texture. The traditional method is to use a cyber scanner to capture the user’s facial texture together with a personal 3D head model, and then use the extracted texture and model in model-based coding; this is expensive and complicated. In this paper, we suggest a sample-based method that incorporates both motion extraction and facial texture extraction within one coding loop. This proves to be an effective and cheaper way to build a model-based coding system.

  • 116.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Semi-automatic initialization for model based coding: a facial feature point based approach (2004). In: Proceedings, Symposium on Image Analysis, 2004, p. 114-117. Conference paper (Refereed)
  • 117.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Semi-Automatic Initialization for Model Based Coding: A Facial Feature Point Based Approach (2003). Report (Other academic)
    Abstract [en]

    In this paper we address the initialization problem for a model-based coding system. A semi-automatic scheme based on feature points is proposed. It has three characteristics: 1) it uses personal feature points, 2) it requires the user’s assistance, and 3) it yields a globally optimal solution. The minimum spanning tree (MST) technique is used to organize the feature points into an ordered path, and a dynamic programming based matching technique is developed to localize the defined feature points. Promising results are obtained and reported.

  • 118.
    Yao, Zhengrong
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tracking a detected face with dynamic programming (2006). In: Image and Vision Computing, ISSN 0262-8856, E-ISSN 1872-8138, Vol. 24, no 6, p. 573-580. Article in journal (Refereed)
    Abstract [en]

    In this paper, we consider the problem of tracking a moving human face in front of a video camera in real time for a model-based coding application. 3D head tracking in an MBC system can be implemented sequentially as 2D location tracking, coarse 3D orientation estimation and accurate 3D motion estimation. This work focuses on 2D location tracking of one face through continuous use of a face detector. The face detection scheme is based on a boosted cascade of simple Haar-like feature classifiers. Although such a detector offers rapid processing, a high detection rate is achieved only for fairly strictly near-frontal faces, which introduces a ‘loss of tracking’ problem when the detector is used for 2D tracking. This paper suggests a simple solution to the pose problem using dynamic programming: the Haar-like facial features used in the 2D face detector are spatially arranged into a 1D deformable face graph, and dynamic programming matches the deformed version of the face graph extracted from a rotated face against a template taken online before loss of tracking occurs. Since the deformable face graph covers a wide pose variation, the developed technique is robust in tracking rotated faces. Embedding Haar-like facial features into a deformable face graph is the key feature of our tracking scheme. A real-time tracking system based on this technique has been built and tested, and encouraging results are reported.

  • 119.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Abedan Kondori, Farid
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tracking fingers in 3D space for mobile interaction (2010). Conference paper (Refereed)
    Abstract [en]

    The number of mobile devices such as mobile phones and PDAs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate in the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns in the local orientation of the image called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which yields a reliable detector for fingertips and hand gestures. Gesture detection and tracking can then be used as an efficient tool for 3D manipulation in various computer vision and augmented reality applications.

  • 120.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D Gestural Interaction for Stereoscopic Visualization on Mobile Devices (2011). In: Computer Analysis of Images and Patterns: 14th International Conference, CAIP 2011, PT 2 / [ed] Real, P; DiazPernil, D; MolinaAbril, H; Berciano, A; Kropatsch, W, Berlin: Springer Berlin/Heidelberg, 2011, p. 555-562. Conference paper (Refereed)
    Abstract [en]

    The number of mobile devices such as smart phones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays, which make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate in the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns in the local orientation of the image called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which yields a reliable detector for hand gestures. Gesture detection and tracking can then be used as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into colour anaglyphs for 3D visualization; depending on the coding technology, different low-cost 3D glasses are used by viewers.

  • 121.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D Visualization of Monocular Images in Photo Collections (2011). Conference paper (Refereed)
  • 122.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D visualization of single images using patch level depth (2011). In: 2011 Proceedings of the International Conference on Signal Processing and Multimedia Applications (SIGMAP 2011) / [ed] Barranco, AL & Tsihrintzis, G, IEEE, 2011, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the task of 3D photo visualization from a single monocular image. The main idea is to take single photos captured by devices such as ordinary cameras, mobile phones and tablet PCs, and visualize them in 3D on normal displays. A supervised learning approach is used to retrieve depth information from single images. The algorithm is based on a hierarchical multi-scale Markov Random Field (MRF), which models depth from multi-scale global and local features, and the relations between them, in a monocular image. The estimated depth image is then used to assign depth parameters to each pixel in the 3D map, after which multi-level depth adjustment and coding into colour anaglyphs is performed. Our system receives a single 2D image as input and produces an anaglyph-coded 3D image as output; depending on the coding technology, matching low-cost anaglyph glasses are used by viewers.
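The anaglyph coding step these visualization papers end with follows the classic red-cyan scheme: the red channel comes from the left view, green and blue from the right view. A minimal sketch (the depth estimation and view synthesis that precede this step are omitted, and the toy images are invented):

```python
import numpy as np

def anaglyph(left, right):
    """Compose a red-cyan anaglyph from left/right RGB views
    (shape H x W x 3, uint8)."""
    out = right.copy()
    out[..., 0] = left[..., 0]   # red from the left eye's view
    return out                   # green and blue stay from the right view

left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200               # left view: pure red content
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1] = 150              # right view: pure green content
img = anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then sees (approximately) only its own view, which is what produces the depth impression.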

  • 123.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Camera-based gesture tracking for 3D interaction behind mobile devices (2012). In: International Journal of Pattern Recognition and Artificial Intelligence, ISSN 0218-0014, Vol. 26, no 8, p. 1260008. Article in journal (Refereed)
    Abstract [en]

    The number of mobile devices such as smartphones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays that make interaction with the device easier and more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate in the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies on particular patterns in the local orientation of the image called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which ensures a reliable detector for fingertips and the user's gesture. Gesture detection and tracking can then be used as an efficient tool for 3D manipulation in various virtual/augmented reality applications.

  • 124.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Experiencing real 3D gestural interaction with mobile devices (2013). In: Pattern Recognition Letters, ISSN 0167-8655, E-ISSN 1872-7344, Vol. 34, no 8, p. 912-921. Article in journal (Refereed)
    Abstract [en]

    The number of mobile devices such as smart phones and tablet PCs has increased dramatically over recent years. New mobile devices are equipped with integrated cameras and large displays that make interaction with the device more efficient. Although most previous work on interaction between humans and mobile devices is based on 2D touch-screen displays, camera-based interaction opens a new way to manipulate in the 3D space behind the device, in the camera's field of view. In this paper, our gestural interaction relies heavily on particular patterns in the local orientation of the image called rotational symmetries. The approach is based on finding the most suitable pattern from a large set of rotational symmetries of different orders, which ensures a reliable detector for hand gestures. Gesture detection and tracking can then be used as an efficient tool for 3D manipulation in various computer vision and augmented reality applications. The final output is rendered into colour anaglyphs for 3D visualization; depending on the coding technology, different low-cost 3D glasses can be used by viewers.

  • 125.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Robust correction of 3D geo-metadata in photo collections by forming a photo grid (2011). In: 2011 International Conference on Wireless Communications and Signal Processing (WCSP), IEEE, 2011, p. 1-5. Conference paper (Refereed)
    Abstract [en]

    In this work, we present a technique for efficient and robust estimation of the exact location and orientation of a photo capture device across a large data set. The data set includes a set of photos and associated GPS and orientation-sensor readings; this attached metadata is noisy and imprecise. Our strategy for correcting this uncertain data is data fusion between a measurement model, derived from the sensor data, and a signal model given by computer vision algorithms. Based on information retrieved from multiple views of a scene, we form a grid of images; robust feature detection and matching between images yields a reliable transformation, and the relative locations and orientations across the data set construct the signal model. Information extracted from the single images, combined with the measurement data, makes up the measurement model. Finally, a Kalman filter fuses the two models iteratively to improve the estimate of the ground-truth (GT) location and orientation. In practice, this approach can support a photo browsing system over a huge collection of photos, enabling 3D navigation and exploration of the data set.
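The fusion step this abstract describes can be illustrated with a single scalar Kalman update: a prior estimate and a new measurement are combined, each weighted by its uncertainty. The numbers below are invented; the paper fuses full position/orientation states rather than one scalar.

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """One scalar Kalman update: fuse a prior estimate with a new
    measurement, weighting each by its (inverse) uncertainty."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Fuse a noisy sensor-metadata position (prior) with a vision-derived one.
est, var = 10.0, 4.0                              # sensor says 10, variance 4
est, var = kalman_update(est, var, 12.0, 4.0)     # vision says 12, variance 4
```

With equal variances the fused estimate lands halfway between the two sources (11.0) and the variance halves (2.0), which is the sense in which fusion "enhances" the estimate.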

  • 126.
    Yousefi, Shahrouz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Kondori, Farid Abedan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Stereoscopic visualization of monocular images in photo collections (2011). Conference paper (Refereed)
  • 127. Yousefi, Shahrouz
    et al.
    Li, Haibo
    Liu, Li
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    3D Gesture Analysis Using a Large-Scale Gesture Database (2014). In: Advances in Visual Computing: 10th International Symposium, ISVC 2014, Las Vegas, NV, USA, December 8-10, 2014, Proceedings, Part I / [ed] Bebis, G; Boyle, R; Parvin, B; Koracin, D; McMahan, R; Jerald, J; Zhang, H; Drucker, SM; Kambhamettu, C; ElChoubassi, M; Deng, Z; Carlson, M, 2014, p. 206-217. Conference paper (Refereed)
    Abstract [en]

    3D gesture analysis is a highly desired feature of future interaction design. Specifically, in augmented environments, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction may be the most effective alternative to current input facilities. This paper introduces a novel solution for real-time 3D gesture analysis using an extremely large gesture database. The database includes images of various articulated hand gestures annotated with the 3D position/orientation parameters of the hand joints. Our search algorithm scores low-level edge-orientation features hierarchically between the query input and the database entries and retrieves the best match. Once the best match is found in real time, its pre-calculated 3D parameters can be used instantly for gesture-based interaction.
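The retrieval idea in this abstract can be sketched as descriptor matching: describe every image by its edge orientations and return the database entry whose descriptor is nearest to the query. This sketch uses a plain magnitude-weighted orientation histogram and L2 distance; the paper's hierarchical scoring is more elaborate, and the stripe-pattern "database" below is invented.

```python
import numpy as np

def orientation_histogram(image, bins=8):
    """Coarse edge-orientation descriptor: histogram of gradient
    directions in [0, pi), weighted by gradient magnitude."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

def best_match(query, database):
    """Index of the database image whose descriptor is closest (L2)."""
    q = orientation_histogram(query)
    dists = [np.linalg.norm(q - orientation_histogram(d)) for d in database]
    return int(np.argmin(dists))

# Toy 'database': a vertical-stripe image and a horizontal-stripe image.
vert = np.tile([0.0, 255.0], (8, 4))   # 8x8, vertical stripes
horz = vert.T                           # horizontal stripes
idx = best_match(vert, [horz, vert])    # the query is the vertical pattern
```

A real system would pre-compute and index the database descriptors so the query-time cost is a nearest-neighbour lookup rather than a linear scan.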

  • 128.
    Zhang, Deng-yin
    et al.
    Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunications, Nanjing 210003, China.
    Chen, Jia-ping
    Institute of Signal Processing and Transmission, Nanjing University of Posts and Telecommunications,China.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    An adaptive image watermarking algorithm based on perceptual model (2008). In: Journal of Communication and Computer, ISSN 1548-7709, Vol. 5, no 2, p. 1-7. Article in journal (Refereed)
    Abstract [en]

    An adaptive image watermarking algorithm based on Watson’s perceptual model is proposed in this paper. The algorithm fully considers the regional characteristics of the image: first, the cover image is divided into regions of differing smoothness according to grey value; then watermark adjustment factors for the different regions are calculated using Watson’s perceptual model; finally, the watermark is embedded using these adjustment factors. Experiments show that the proposed algorithm has excellent imperceptibility and has little influence on the eigenvalues of the cover image.
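The region-adaptive idea, more embedding energy where the image can hide it, can be sketched by deriving a per-block strength factor from local texture. This is a crude stand-in for the Watson-model adjustment factors the paper computes: the block size, the standard-deviation measure, and the scaling constant 64 are all invented for illustration.

```python
import numpy as np

def local_strength(image, block=4, base=1.0):
    """Per-block embedding strength: flat regions get the base factor,
    textured regions a larger one, so the watermark amplitude adapts
    to regional characteristics."""
    h, w = image.shape
    factors = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block]
            factors[i // block, j // block] = base * (1.0 + patch.std() / 64.0)
    return factors

# One perfectly flat block next to one high-contrast checkerboard block.
flat = np.full((4, 4), 128.0)
busy = np.tile(np.array([[0.0, 255.0], [255.0, 0.0]]), (2, 2))
img = np.block([[flat, busy]])          # 4 x 8 test image
f = local_strength(img)                  # shape (1, 2): one factor per block
```

The watermark would then be embedded as `image + factor * watermark_pattern` block by block, keeping the distortion below the visibility threshold in smooth regions.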
