umu.se Publications
  • 1.
    Abedin, Md Reaz Ashraful
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Self-supervised language grounding by active sensing combined with Internet acquired images and text (2017). In: Proceedings of the Fourth International Workshop on Recognition and Action for Scene Understanding (REACTS2017) / [ed] Jorge Dias, George Azzopardi, Rebeca Marf, Málaga: REACTS, 2017, p. 71-83. Conference paper (Refereed)
    Abstract [en]

    For natural and efficient verbal communication between a robot and humans, the robot should be able to learn the names and appearances of new objects it encounters. In this paper we present a solution combining active sensing of images with text-based and image-based search on the Internet. The approach allows the robot to learn both the object name and how to recognise similar objects in the future, all self-supervised without human assistance. One part of the solution is a novel iterative method to determine the object name using image classification, acquisition of images from additional viewpoints, and Internet search. In this paper, the algorithmic part of the proposed solution is presented together with evaluations using manually acquired camera images, while Internet data was acquired through direct and reverse image search with Google, Bing, and Yandex. Classification with multi-class SVM and with five different feature settings was evaluated. With five object classes, the best performing classifier used a combination of Pyramid of Histogram of Visual Words (PHOW) and Pyramid of Histogram of Oriented Gradient (PHOG) features, and reached a precision of 80% and a recall of 78%.
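The reported precision and recall are per-class scores aggregated over the five object classes. As a small illustration (the labels are hypothetical, and macro-averaging is an assumption since the abstract does not state the averaging scheme), such figures can be computed as:

```python
def macro_precision_recall(y_true, y_pred):
    """Macro-averaged precision and recall over the classes in y_true."""
    classes = sorted(set(y_true))
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(precisions) / len(classes), sum(recalls) / len(classes)
```

Micro-averaging (pooling all decisions before dividing) would weight classes by frequency instead; with balanced classes the two coincide.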

  • 2.
    Alaa, Halawani
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Haibo, Li
    School of Computer Science & Communication, Royal Institute of Technology (KTH), Stockholm, Sweden.
    Template-based Search: A Tool for Scene Analysis (2016). In: 12th IEEE International Colloquium on Signal Processing & its Applications (CSPA): Proceeding, IEEE, 2016, article id 7515772. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a simple and yet effective technique for shape-based scene analysis, in which detection and/or tracking of specific objects or structures in the image is desirable. The idea is based on using predefined binary templates of the structures to be located in the image. The template is matched to contours in a given edge image to locate the designated entity. These templates are allowed to deform in order to deal with variations in the structure's shape and size. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust results in cluttered and noisy scenes in the applications presented.
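The dynamic-programming matching described above can be illustrated with a deliberately simplified sketch: template segments are assigned, in order, to positions along a contour, trading a local match cost against a deformation penalty on the spacing between consecutive segments. The cost model and all names here are hypothetical, not the paper's actual formulation:

```python
def match_template(unary, ideal_gap, deform_w=1.0):
    """Order-preserving DP assignment of template segments to contour positions.

    unary[i][j]: cost of placing segment i at contour position j.
    ideal_gap:   preferred spacing between consecutive segment placements.
    Returns (min_total_cost, placements)."""
    m, n = len(unary), len(unary[0])
    INF = float("inf")
    cost = [[INF] * n for _ in range(m)]
    back = [[-1] * n for _ in range(m)]
    cost[0] = list(unary[0])
    for i in range(1, m):
        for j in range(i, n):            # placements must strictly increase
            for k in range(i - 1, j):
                c = cost[i - 1][k] + unary[i][j] \
                    + deform_w * abs((j - k) - ideal_gap)
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, k
    best = min(range(m - 1, n), key=lambda j: cost[m - 1][j])
    placements, i, j = [best], m - 1, best
    while i > 0:                          # backtrack the optimal assignment
        j = back[i][j]
        placements.append(j)
        i -= 1
    return cost[m - 1][best], placements[::-1]
```

The DP guarantees a global optimum over all ordered placements, which is what makes this family of methods robust in cluttered edge images.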

  • 3.
    Algers, Björn
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Stereo Camera Calibration Accuracy in Real-time Car Angles Estimation for Vision Driver Assistance and Autonomous Driving (2018). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The automotive safety company Veoneer produces high-end driver visual assistance systems, but knowledge about the absolute accuracy of its dynamic calibration algorithms, which estimate the vehicle’s orientation, is limited.

    In this thesis, a novel measurement system is proposed to be used in gathering reference data of a vehicle’s orientation as it is in motion, more specifically the pitch and roll angle of the vehicle. Focus has been to estimate how the uncertainty of the measurement system is affected by errors introduced during its construction, and to evaluate its potential in being a viable tool in gathering reference data for algorithm performance evaluation.

    The system consisted of three laser distance sensors mounted on the body of the vehicle, and a range of data acquisition sequences with different perturbations were performed by driving along a stretch of road in Linköping with weights loaded in the vehicle. The reference data were compared to camera system data where the bias of the calculated angles was estimated, along with the dynamic behaviour of the camera system algorithms. The experimental results showed that the accuracy of the system exceeded 0.1 degrees for both pitch and roll, but no conclusions about the bias of the algorithms could be drawn as there were systematic errors present in the measurements.
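The geometry behind such a reference system can be sketched minimally: three downward-facing distance sensors define three ground points in the body frame, and the plane through them gives pitch and roll. The sensor layout, axis conventions, and small-angle plane model below are illustrative assumptions, not details from the thesis:

```python
import math

def pitch_roll(sensors, dists):
    """Estimate pitch and roll (radians) from three downward-facing laser
    distance sensors.  sensors: [(x, y), ...] mounting points in the body
    frame (x forward, y left); dists: measured distances to the ground.
    Fits the ground plane z = a*x + b*y + c through the three hit points
    (z = -distance) and returns (pitch, roll) as the plane's slopes."""
    (x1, y1), (x2, y2), (x3, y3) = sensors
    z1, z2, z3 = (-d for d in dists)
    # Solve the 3x3 system [x y 1][a b c]' = z by Cramer's rule.
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    a = (z1 * (y2 - y3) - y1 * (z2 - z3) + (z2 * y3 - z3 * y2)) / det
    b = (x1 * (z2 - z3) - z1 * (x2 - x3) + (x2 * z3 - x3 * z2)) / det
    return math.atan(a), math.atan(b)
```

With all three distances equal the vehicle is level and both angles are zero; a shorter reading at the front sensor corresponds to nose-down pitch under this sign convention.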

  • 4.
    AliNazari, Mirian
    Umeå University, Faculty of Teacher Education, Department of Creative Studies.
    Kreativ Uppväxtmiljö: en studie av stadieteorier [Creative childhood environment: a study of stage theories] (2007). Independent thesis Basic level (professional degree), 10 credits / 15 HE credits. Student thesis
    Abstract [sv]

    This degree project studied the development of children's drawing, which was also compared with the author's own childhood environment. The method was a literature study covering aesthetic forms of expression and creative upbringing. In addition, the opportunities the author's own upbringing offered for practising creative ability were examined in relation to personal development. A comparison was made with stage theories on the development of children's picture-making. Through documentation of the author's own pictures from early years, drawing development across the different stages was illustrated. The conclusion is that creative ability is likely influenced by an upbringing rich in opportunities to paint and draw, something art teachers can build on in their work with children. The need, as a future teacher, to integrate pictures into the theoretical subjects may develop these possibilities further.

  • 5.
    Andersson, Axel
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Real-Time Feedback for Agility Training: Tracking of reflective markers using a time-of-flight camera (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
  • 6.
    Becher, Marina
    et al.
    Umeå University, Faculty of Science and Technology, Department of Ecology and Environmental Sciences.
    Börlin, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Klaminder, Jonatan
    Umeå University, Faculty of Science and Technology, Department of Ecology and Environmental Sciences.
    Measuring soil motion with terrestrial close range photogrammetry in periglacial environments (2014). In: EUCOP 4: Book of Abstracts / [ed] Gonçalo Vieira, Pedro Pina, Carla Mora and António Correia, University of Lisbon and the University of Évora, 2014, p. 351-351. Conference paper (Other academic)
    Abstract [en]

    Cryoturbation plays an important role in the carbon cycle as it redistributes carbon deeper down in the soil, where the cold temperature prevents microbial decomposition. This contribution is also included in recent models describing the long-term build-up of carbon stocks in arctic soils. Soil motion rates in cryoturbated soils are sparsely studied. This is because the internal factors maintaining cryoturbation will be affected by any excavation, making it impossible to remove soil samples or install pegs without changing the structure of the soil. So far, mainly the motion of soil surface markers on patterned ground has been used to infer lateral soil motion rates. However, such methods constrain the investigated area to a predetermined distribution of surface markers, which may result in a loss of information about soil motion in other parts of the patterned ground surface.

    We present a novel method based on terrestrial close range (<5m) photogrammetry to calculate lateral and vertical soil motion across entire small-scale periglacial features, such as non-sorted circles (frost boils). Images were acquired by a 5-camera calibrated rig from at least 8 directions around a non-sorted circle. During acquisition, the rig was carried by one person in a backpack-like portable camera support system. Natural feature points were detected by SIFT and matched between images using the known epipolar geometry of the calibrated rig. The 3D coordinates of points matched between at least 3 images were calculated to create a point cloud of the surface of interest. The procedure was repeated during two consecutive years to be able to measure any net displacement of soil and calculate rates of soil motion. The technique was also applied to a peat palsa, where multiple exposures were acquired of selected areas.

    The method has the potential to quantify areas of disturbance and estimate lateral and vertical soil motion in non-sorted circles. Furthermore, it should be possible to quantify peat erosion and rates of desiccation crack formations in peat palsas. This tool could provide new information about cryoturbation rates that could improve existing soil carbon models and increase our understanding about how soil carbon stocks will respond to climate change.
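The 3D point computation in the pipeline above is standard multi-view triangulation. A minimal two-view linear (DLT) triangulation sketch, not the authors' 5-camera implementation, shows the core step of turning matched image points into a 3D coordinate:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Returns the 3D point minimising the algebraic reprojection error."""
    # Each view contributes two rows u*p3 - p1 = 0 and v*p3 - p2 = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With more than two views the same construction simply stacks two rows per camera, which is how a matched point seen in at least 3 rig images would be reconstructed.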

  • 7.
    Billing, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Cognition Rehearsed: Recognition and Reproduction of Demonstrated Behavior (2012). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    The work presented in this dissertation investigates techniques for robot Learning from Demonstration (LFD). LFD is a well established approach where the robot is to learn from a set of demonstrations. The dissertation focuses on LFD where a human teacher demonstrates a behavior by controlling the robot via teleoperation. After demonstration, the robot should be able to reproduce the demonstrated behavior under varying conditions. In particular, the dissertation investigates techniques where previous behavioral knowledge is used as bias for generalization of demonstrations.

    The primary contribution of this work is the development and evaluation of a semi-reactive approach to LFD called Predictive Sequence Learning (PSL). PSL has many interesting properties applied as a learning algorithm for robots. Few assumptions are introduced and little task-specific configuration is needed. PSL can be seen as a variable-order Markov model that progressively builds up the ability to predict or simulate future sensory-motor events, given a history of past events. The knowledge base generated during learning can be used to control the robot, such that the demonstrated behavior is reproduced. The same knowledge base can also be used to recognize an on-going behavior by comparing predicted sensor states with actual observations. Behavior recognition is an important part of LFD, both as a way to communicate with the human user and as a technique that allows the robot to use previous knowledge as parts of new, more complex, controllers.

    In addition to the work on PSL, this dissertation provides a broad discussion on representation, recognition, and learning of robot behavior. LFD-related concepts such as demonstration, repetition, goal, and behavior are defined and analyzed, with focus on how bias is introduced by the use of behavior primitives. This analysis results in a formalism where LFD is described as transitions between information spaces. Assuming that the behavior recognition problem is partly solved, ways to deal with remaining ambiguities in the interpretation of a demonstration are proposed.

    The evaluation of PSL shows that the algorithm can efficiently learn and reproduce simple behaviors. The algorithm is able to generalize to previously unseen situations while maintaining the reactive properties of the system. As the complexity of the demonstrated behavior increases, knowledge of one part of the behavior sometimes interferes with knowledge of other parts. As a result, different situations with similar sensory-motor interactions are sometimes confused and the robot fails to reproduce the behavior.

    One way to handle these issues is to introduce a context layer that can support PSL by providing bias for predictions. Parts of the knowledge base that appear to fit the present context are highlighted, while other parts are inhibited. Which context should be active is continually re-evaluated using behavior recognition. This technique takes inspiration from several neurocomputational models that describe parts of the human brain as a hierarchical prediction system. With behavior recognition active, continually selecting the most suitable context for the present situation, the problem of knowledge interference is significantly reduced and the robot can successfully reproduce also more complex behaviors.
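The variable-order Markov idea behind PSL can be made concrete with a toy predictor. This is an illustrative sketch of the general technique, not the published PSL algorithm: store next-event counts for every context (suffix) up to a maximum order, then predict from the longest context that has been seen before:

```python
from collections import defaultdict, Counter

class SequencePredictor:
    """Toy variable-order Markov predictor over discrete events."""

    def __init__(self, max_order=4):
        self.max_order = max_order
        self.counts = defaultdict(Counter)  # context tuple -> next-event counts

    def train(self, seq):
        for i in range(1, len(seq)):
            for order in range(1, self.max_order + 1):
                if i - order < 0:
                    break
                ctx = tuple(seq[i - order:i])
                self.counts[ctx][seq[i]] += 1

    def predict(self, history):
        # Back off from the longest matching context to shorter ones.
        for order in range(min(self.max_order, len(history)), 0, -1):
            ctx = tuple(history[-order:])
            if ctx in self.counts:
                return self.counts[ctx].most_common(1)[0][0]
        return None
```

Longer contexts disambiguate situations that look identical at short range, which is the same role the context layer plays for PSL in the dissertation.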

  • 8.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Robot learning from demonstration using predictive sequence learning (2011). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2011, p. 235-250. Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 9. Billing, Erik
    et al.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Simultaneous recognition and reproduction of demonstrated behavior (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53. Article in journal (Refereed)
    Abstract [en]

    Prediction of sensory-motor interactions with the world is often referred to as a key component of cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: (1) to identify which behavior best matches the current context and (2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, increases significantly with the use of contextual information. The results support further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 10. Bontsema, Jan
    et al.
    Hemming, Jochen
    Pekkeriet, Erik
    Saeys, Wouter
    Edan, Yael
    Shapiro, Amir
    Hočevar, Marko
    Oberti, Roberto
    Armada, Manuel
    Ulbrich, Heinz
    Baur, Jörg
    Debilde, Benoit
    Best, Stanley
    Evain, Sébastien
    Gauchel, Wolfgang
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    CROPS: Clever Robots for Crops (2015). In: Engineering & Technology Reference, ISSN 2056-4007, Vol. 1, no 1. Article in journal (Refereed)
    Abstract [en]

    In the EU-funded CROPS project, robots are developed for site-specific spraying and selective harvesting of fruit and fruit vegetables. The robots are designed to harvest crops such as greenhouse vegetables, apples and grapes, for canopy spraying in orchards, and for precision target spraying in grape vines. Attention is paid to the detection of obstacles for safe autonomous navigation in plantations and forests. Platforms were built for the different applications, and sensing systems and vision algorithms have been developed. The Robot Operating System is used for the software. A 9-degree-of-freedom manipulator was designed and tested for sweet-pepper harvesting, apple harvesting and close-range spraying. Different end-effectors were designed and tested for the applications. For sweet pepper, a platform was built that can move between the crop rows on the common greenhouse rail system, which also serves as heating pipes. The apple harvesting platform is based on a current mechanical grape harvester. In discussion with growers, so-called ‘walls of fruit trees’ have been designed, which bring robots closer to the practice. A canopy-optimised sprayer has been designed as a trailed sprayer with a centrifugal blower. All the applications have been tested under practical conditions.

  • 11.
    Börlin, Niclas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Bundle adjustment with and without damping (2013). In: Photogrammetric Record, ISSN 0031-868X, E-ISSN 1477-9730, Vol. 28, no 144, p. 396-415. Article in journal (Refereed)
    Abstract [en]

    The least squares adjustment (LSA) method is studied as an optimisation problem and shown to be equivalent to the undamped Gauss-Newton (GN) optimisation method. Three problem-independent damping modifications of the GN method are presented: the line-search method of Armijo (GNA); the Levenberg-Marquardt algorithm (LM); and Levenberg-Marquardt-Powell (LMP). Furthermore, an additional problem-specific "veto" damping technique, based on the chirality condition, is suggested. In a perturbation study on a terrestrial bundle adjustment problem the GNA and LMP methods with veto damping can increase the size of the pull-in region compared to the undamped method; the LM method showed less improvement. The results suggest that damped methods can, in many cases, provide a solution where undamped methods fail and should be available in any LSA software package. Matlab code for the algorithms discussed is available from the authors.
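The damping idea can be illustrated with a compact Levenberg-Marquardt sketch — a generic textbook variant, not the paper's GNA/LMP implementations: the damping parameter interpolates between a Gauss-Newton step (small damping) and a short gradient-descent step (large damping), growing whenever a trial step is rejected:

```python
import numpy as np

def levenberg_marquardt(r, jac, x0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimise ||r(x)||^2.  Each step solves the damped normal equations
    (J'J + lam*I) dx = -J'r; lam shrinks after an accepted step (towards
    Gauss-Newton) and grows after a rejected one (towards gradient descent)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J, res = jac(x), r(x)
        g = J.T @ res
        if np.linalg.norm(g) < tol:
            break
        A = J.T @ J + lam * np.eye(len(x))
        dx = np.linalg.solve(A, -g)
        if np.sum(r(x + dx) ** 2) < np.sum(res ** 2):
            x, lam = x + dx, lam * 0.3   # accept: move towards Gauss-Newton
        else:
            lam *= 10.0                  # reject: increase damping
    return x
```

Setting `lam = 0` throughout recovers the undamped Gauss-Newton iteration the paper equates with classical LSA; the damping is what enlarges the pull-in region.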

  • 12.
    Börlin, Niclas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Camera Calibration using the Damped Bundle Adjustment Toolbox (2014). In: ISPRS Annals - Volume II-5, 2014: ISPRS Technical Commission V Symposium, 23–25 June 2014, Riva del Garda, Italy / [ed] F. Remondino and F. Menna, Copernicus GmbH, 2014, Vol. II-5, p. 89-96. Conference paper (Refereed)
    Abstract [en]

    Camera calibration is one of the fundamental photogrammetric tasks. The standard procedure is to apply an iterative adjustment to measurements of known control points. The iterative adjustment needs initial values of internal and external parameters. In this paper we investigate a procedure where only one parameter, the focal length, is given a specific initial value. The procedure is validated using the freely available Damped Bundle Adjustment Toolbox on five calibration data sets using varying narrow- and wide-angle lenses. The results show that the Gauss-Newton-Armijo and Levenberg-Marquardt-Powell bundle adjustment methods implemented in the toolbox converge even if the initial values of the focal length are between 1/2 and 32 times the true focal length, and even if the parameters are highly correlated. Standard statistical analysis methods in the toolbox enable manual selection of the lens distortion parameters to estimate, something not available in other camera calibration toolboxes. Based on the convergence results, a standardised camera calibration procedure that does not require any information about the camera sensor or focal length is suggested. The toolbox source and data sets used in this paper are available from the authors.

  • 13.
    Börlin, Niclas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Grussenmeyer, Pierre
    INSA Strasbourg, France.
    Experiments with Metadata-derived Initial Values and Linesearch Bundle Adjustment in Architectural Photogrammetry (2013). Conference paper (Refereed)
    Abstract [en]

    According to the Waldhäusl and Ogleby (1994) "3x3 rules", a well-designed close-range architectural photogrammetric project should include a sketch of the project site with the approximate position and viewing direction of each image. This orientation metadata is important to determine which part of the object each image covers. In principle, the metadata could be used as initial values for the camera external orientation (EO) parameters. However, this has rarely been done, partly due to convergence problems for the bundle adjustment procedure.

    In this paper we present a photogrammetric reconstruction pipeline based on classical methods and investigate if and how the linesearch bundle algorithms of Börlin et al. (2004) and/or metadata can be used to aid the reconstruction process in architectural photogrammetry when the classical methods fail. The primary initial values for the bundle are calculated by the five-point algorithm by Nistér (Stewénius et al., 2006). Should the bundle fail, initial values derived from metadata are calculated and used for a second bundle attempt.

    The pipeline was evaluated on an image set of the INSA building in Strasbourg. The data set includes mixed convex and non-convex subnetworks and a combination of manual and automatic measurements.

    The results show that, in general, the classical bundle algorithm with five-point initial values worked well. However, in cases where it did fail, linesearch bundle and/or metadata initial values did help. The presented approach is interesting for solving EO problems when the automatic orientation processes fail, as well as for keeping a link between the metadata, which describe how the project was planned, and the actual reconstructed network as it turned out.

  • 14.
    Claesson, Kenji
    Umeå University, Faculty of Science and Technology, Physics.
    Implementation and Validation of Independent Vector Analysis (2010). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This Master’s Thesis was part of the project called Multimodalanalysis at the Department of Biomedical Engineering and Informatics at the Umeå University Hospital in Umeå, Sweden. The aim of the project is to develop multivariate measurement and analysis methods for skeletal muscle physiology. One of the methods used to scan the muscle is functional ultrasound. In a study performed by the project group, data was acquired where test subjects were instructed to follow a certain exercise scheme, which was measured. Since there is currently no superior method to analyze the resulting data (in the form of ultrasound video sequences), several methods are being looked at. One considered method is called Independent Vector Analysis (IVA). IVA is a statistical method to find independent components in a mix of components. This Master’s Thesis is about segmenting and analyzing the ultrasound images with the help of IVA, to validate whether it is a suitable method for this kind of task.

    First the algorithm was tested on generated mixed data to find out how well it performed. The results were very accurate, considering that the method only uses approximations, although some expected deviation from the true values occurred.

    When the algorithm was considered to perform satisfactorily, it was tested on the data gathered in the study, and the result may well reflect an approximation of the true solution, since the resulting segmented signals seem to move in a plausible way. The method has weak sides (which have been minimized as far as possible), and all error analysis has been done by the human eye, which is definitely a weak point. For the time being, however, it is more important to analyze trends in the signals than exact numbers, so as long as the signals behave in a realistic way the result cannot be said to be completely wrong. The overall results of the method were deemed adequate for the application at hand.
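The independence-seeking idea that IVA generalises can be illustrated on the simplest two-signal case. This is a toy kurtosis-based sketch of independent component separation, not the thesis's IVA implementation: whiten the mixture, then scan rotations of the whitened data for the one whose outputs are maximally non-Gaussian:

```python
import numpy as np

def separate_two(X, n_angles=181):
    """Separate two linearly mixed, statistically independent signals.

    Whitens the 2xN mixture X, then scans rotations of the whitened data
    and keeps the one whose outputs have the most extreme (non-Gaussian)
    excess kurtosis -- the contrast-maximisation idea behind ICA."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate and normalise variance via eigendecomposition.
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X

    def kurt(s):
        return np.mean(s ** 4) - 3.0      # excess kurtosis at unit variance

    best, best_score = Z, -np.inf
    for theta in np.linspace(0.0, np.pi, n_angles):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ Z
        score = abs(kurt(Y[0])) + abs(kurt(Y[1]))
        if score > best_score:
            best, best_score = Y, score
    return best
```

After whitening, the sources are recoverable only up to a rotation, so a one-parameter search suffices in 2D; IVA extends this independence criterion jointly across multiple frequency bands or datasets.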

  • 15.
    de Pierrefeu, Amicie
    et al.
    NeuroSpin, CEA, Gif-sur-Yvette, France.
    Löfstedt, Tommy
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Laidi, C.
    NeuroSpin, CEA, Gif-sur-Yvette, France; Institut National de la Santé et de la Recherche Médicale (INSERM), U955, Institut Mondor de Recherche Biomédicale, Psychiatrie Translationnelle, Créteil, France; Fondation Fondamental, Créteil, France; Pôle de Psychiatrie, Assistance Publique–Hôpitaux de Paris (AP-HP), Faculté, de Médecine de Créteil, DHU PePsy, Hôpitaux Universitaires Mondor, Créteil, France.
    Hadj-Selem, Fouad
    Energy Transition Institute: VeDeCoM, Versailles, France.
    Bourgin, Julie
    Department of Psychiatry, Louis-Mourier Hospital, AP-HP, Colombes, France; INSERM U894, Centre for Psychiatry and Neurosciences, Paris, France.
    Hajek, Tomas
    Department of Psychiatry, Dalhousie University, Halifax, NS, Canada; National Institute of Mental Health, Klecany, Czech Republic.
    Spaniel, Filip
    National Institute of Mental Health, Klecany, Czech Republic.
    Kolenic, Marian
    National Institute of Mental Health, Klecany, Czech Republic.
    Ciuciu, Philippe
    NeuroSpin, CEA, Gif-sur-Yvette, France; INRIA, CEA, Parietal team, University of Paris-Saclay, France.
    Hamdani, Nora
    Institut National de la Santé et de la Recherche Médicale (INSERM), U955, Institut Mondor de Recherche Biomédicale, Psychiatrie Translationnelle, Créteil, France; Fondation Fondamental, Créteil, France; Pôle de Psychiatrie, Assistance Publique–Hôpitaux de Paris (AP-HP), Faculté, de Médecine de Créteil, DHU PePsy, Hôpitaux Universitaires Mondor, Créteil, France.
    Leboyer, Marion
    Institut National de la Santé et de la Recherche Médicale (INSERM), U955, Institut Mondor de Recherche Biomédicale, Psychiatrie Translationnelle, Créteil, France; Fondation Fondamental, Créteil, France; Pôle de Psychiatrie, Assistance Publique–Hôpitaux de Paris (AP-HP), Faculté, de Médecine de Créteil, DHU PePsy, Hôpitaux Universitaires Mondor, Créteil, France.
    Fovet, Thomas
    Laboratoire de Sciences Cognitives et Sciences Affectives (SCALab-PsyCHIC), CNRS UMR 9193, University of Lille; Pôle de Psychiatrie, Unité CURE, CHU Lille, Lille, France.
    Jardri, Renaud
    INRIA, CEA, Parietal team, University of Paris-Saclay, France; Laboratoire de Sciences Cognitives et Sciences Affectives (SCALab-PsyCHIC), CNRS UMR 9193, University of Lille; Pôle de Psychiatrie, Unité CURE, CHU Lille, Lille, France.
    Houenou, Josselin
    NeuroSpin, CEA, Gif-sur-Yvette, France; Institut National de la Santé et de la Recherche Médicale (INSERM), U955, Institut Mondor de Recherche Biomédicale, Psychiatrie Translationnelle, Créteil, France; Fondation Fondamental, Créteil, France; Pôle de Psychiatrie, Assistance Publique–Hôpitaux de Paris (AP-HP), Faculté, de Médecine de Créteil, DHU PePsy, Hôpitaux Universitaires Mondor, Créteil, France.
    Duchesnay, Edouard
    NeuroSpin, CEA, Gif-sur-Yvette, France.
    Identifying a neuroanatomical signature of schizophrenia, reproducible across sites and stages, using machine-learning with structured sparsity (2018). In: Acta Psychiatrica Scandinavica, ISSN 0001-690X, E-ISSN 1600-0447, Vol. 138, p. 571-580. Article in journal (Refereed)
    Abstract [en]

    Objective: Structural MRI (sMRI) increasingly offers insight into abnormalities inherent to schizophrenia. Previous machine learning applications suggest that individual classification is feasible and reliable; however, they have focused on the predictive performance of the clinical status in cross‐sectional designs, which offers limited biological perspective. Moreover, most studies depend on relatively small cohorts or a single recruiting site. Finally, no study has controlled for disease stage or medication effects. These elements cast doubt on the reproducibility of previous findings.

    Method: We propose a machine learning algorithm that provides an interpretable brain signature. Using large datasets collected from 4 sites (276 schizophrenia patients, 330 controls), we assessed cross‐site prediction reproducibility and the associated predictive signature. For the first time, we evaluated the predictive signature regarding medication and illness duration using an independent dataset of first‐episode patients.

    Results: Machine learning classifiers based on neuroanatomical features yield significant intersite prediction accuracies (72%) together with excellent stability of the predictive signature. This signature provides a neural score significantly correlated with symptom severity and the extent of cognitive impairments. Moreover, the signature demonstrates its efficiency on first-episode psychosis patients (73% accuracy).

    Conclusion: These results highlight the existence of a common neuroanatomical signature for schizophrenia, shared by a majority of patients even from an early stage of the disorder.

  • 16.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Jevtić, Aleksandar
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations2015In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, p. 24-39Article in journal (Refereed)
    Abstract [en]

    In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires attention by the robot, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, the applicability of the system in an object shape classification scenario is evaluated.
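
    The core ACO mechanics the abstract refers to, probabilistic selection proportional to pheromone followed by evaporation and deposit, can be sketched generically. This is a textbook-style sketch in Python, not the paper's implementation; the function names and parameters are illustrative:

    ```python
    import random

    def aco_select(candidates, pheromone, alpha=1.0):
        """Roulette-wheel selection with probability proportional to
        pheromone**alpha -- the basic ACO choice rule."""
        weights = [pheromone[c] ** alpha for c in candidates]
        total = sum(weights)
        r = random.random() * total
        acc = 0.0
        for c, w in zip(candidates, weights):
            acc += w
            if acc >= r:
                return c
        return candidates[-1]

    def evaporate_and_deposit(pheromone, chosen, rho=0.1, q=1.0):
        """Standard pheromone update: global evaporation by factor (1 - rho),
        then deposit q on the components used by the best solution."""
        for k in pheromone:
            pheromone[k] *= (1.0 - rho)
        for k in chosen:
            pheromone[k] += q
    ```

    Repeated selection and update concentrates pheromone, and hence future choices, on components that keep appearing in good solutions.
    
    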

  • 17.
    Forsman, Mona
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Point cloud densification2010Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Several automatic methods exist for creating 3D point clouds extracted from 2D photos. In many cases, the result is a sparse point cloud, unevenly distributed over the scene. After determining the coordinates of the same point in two images of an object, the 3D position of that point can be calculated using knowledge of camera data and relative orientation. A model created from an unevenly distributed point cloud may lose detail and precision in the sparse areas. The aim of this thesis is to study methods for densification of point clouds.

    This thesis contains a literature study of different methods for extracting matched point pairs, and an implementation of Least Squares Template Matching (LSTM) with a set of improvement techniques. The implementation is evaluated on a set of scenes of varying difficulty. LSTM is implemented by working on a dense grid of points in an image, and Wallis filtering is used to enhance contrast. The matched point correspondences are evaluated with parameters from the optimization in order to keep good matches and discard bad ones. The purpose is to find details close to a plane in the images, or on plane-like surfaces. A set of extensions to LSTM is implemented with the aim of improving the quality of the matched points. The seed points are improved by Transformed Normalized Cross Correlation (TNCC) and Multiple Seed Points (MSP) for the same template, which are then tested to see if they converge to the same result. The quality of the extracted points is evaluated with respect to correlation with other optimization parameters and comparison of the standard deviation in the x- and y-directions. If a point is rejected, the option exists to try again with a larger template size, called Adaptive Template Size (ATS).
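
    Normalized cross-correlation, the similarity measure underlying the TNCC seed-point check mentioned above, can be sketched as follows. This is a generic illustration of the standard NCC formula, not code from the thesis:

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches.
        Invariant to affine brightness changes; ranges from -1 to 1."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0
    ```

    The brightness invariance is why a patch and a linearly rescaled copy of it (e.g. under different illumination) still correlate perfectly.
    
    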

  • 18.
    Forsman, Mona
    et al.
    Department of Forest Resource Management, Swedish University of Agricultural Sciences, 90183 Umeå, Sweden.
    Börlin, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Holmgren, Johan
    Department of Forest Resource Management, Swedish University of Agricultural Sciences, 90183 Umeå, Sweden.
    Estimation of Tree Stem Attributes using Terrestrial Photogrammetry with a Camera Rig2016In: Forests, ISSN 1999-4907, E-ISSN 1999-4907, Vol. 7, no 3, article id 61Article in journal (Refereed)
    Abstract [en]

    We propose a novel photogrammetric method for field plot inventory, designed for simplicity and time efficiency on-site. A prototype multi-camera rig was used to acquire images from field plot centers in multiple directions. The acquisition time on-site was less than two minutes. From each view, a point cloud was generated using a novel, rig-based matching of detected SIFT keypoints. Stems were detected in the merged point cloud, and their positions and diameters were estimated. The method was evaluated on 25 hemi-boreal forest plots of a 10-m radius. Due to difficult lighting conditions and faulty hardware, imagery from only six field plots was processed. The method performed best on three plots with clearly visible stems, with a 76% detection rate and 0% commission. Diameters could be estimated for 40% of the stems with an RMSE of 2.8-9.5 cm. The results are comparable to other camera-based methods evaluated in a similar manner. The results are inferior to TLS-based methods. However, our method is easily extended to multiple-station image schemas, something that could significantly improve the results while retaining low commission errors and time on-site. Furthermore, with smaller hardware, we believe this could be a useful technique for measuring stem attributes in the forest.

  • 19.
    Garpebring, Anders
    et al.
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Brynolfsson, Patrik
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Kuess, Peter
    Department of Radiotherapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Vienna, Austria.
    Georg, Dietmar
    Department of Radiotherapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Vienna, Austria.
    Helbich, Thomas H.
    Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Vienna, Austria.
    Nyholm, Tufve
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Löfstedt, Tommy
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Density Estimation of Grey-Level Co-Occurrence Matrices for Image Texture Analysis2018In: Physics in Medicine and Biology, ISSN 0031-9155, E-ISSN 1361-6560, Vol. 63, no 19, p. 9-15, article id 195017Article in journal (Refereed)
    Abstract [en]

    The Haralick texture features are common in the image analysis literature, partly because of their simplicity and because their values can be interpreted. It was recently observed that the Haralick texture features are very sensitive to the size of the grey-level co-occurrence matrix (GLCM) that was used to compute them, which led to a new formulation that is invariant to the GLCM size. However, these new features still depend on the sample size used to compute the GLCM, i.e. the size of the input image region-of-interest (ROI).

    The purpose of this work was to investigate the performance of density estimation methods for approximating the GLCM and subsequently the corresponding invariant features.

    Three density estimation methods were evaluated, namely a piece-wise constant distribution, the Parzen-windows method, and the Gaussian mixture model. The methods were evaluated on 29 different image textures and 20 invariant Haralick texture features as well as a wide range of different ROI sizes.

    The results indicate that there are two types of features: those that have a clear minimum error for a particular GLCM size for each ROI size, and those whose error decreases monotonically with increased GLCM size. For the first type of features, the Gaussian mixture model gave the smallest errors, and in particular for small ROI sizes (less than about 20×20).

    In conclusion, the Gaussian mixture model is the preferred method for the first type of features (in particular for small ROIs). For the second type of features, simply using a large GLCM size is preferred.
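
    For reference, a GLCM is a normalized co-occurrence histogram over pairs of grey levels at a fixed pixel offset, and the Haralick features are statistics of that joint distribution. A minimal sketch of the classical (non-invariant) construction, not the paper's density-estimated formulation:

    ```python
    import numpy as np

    def glcm(image, levels, offset=(0, 1)):
        """Grey-level co-occurrence matrix for one pixel offset, normalized
        so the entries form a joint probability distribution."""
        dr, dc = offset
        rows, cols = image.shape
        P = np.zeros((levels, levels))
        for r in range(max(0, -dr), min(rows, rows - dr)):
            for c in range(max(0, -dc), min(cols, cols - dc)):
                P[image[r, c], image[r + dr, c + dc]] += 1
        return P / P.sum()

    def energy(P):
        """Haralick 'energy' (angular second moment) of a GLCM."""
        return float(np.sum(P ** 2))
    ```

    On a constant image all co-occurrences fall into one cell, so the energy attains its maximum of 1; texture spreads the mass and lowers it.
    
    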

  • 20.
    Guillemot, Vincent
    et al.
    Bioinformatics and Biostatistics Hub, Institut Pasteur, Paris, France.
    Beaton, Derek
    The Rotman Research Institute, Institution at Baycrest, Toronto, Canada.
    Gloaguen, Arnaud
    L2S, UMR CNRS 8506, CNRS–Centrale Supélec–Université Paris-Sud, Université Paris-Saclay, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette, France.
    Löfstedt, Tommy
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics.
    Levine, Brian
    The Rotman Research Institute, Institution at Baycrest, Toronto, Canada.
    Raymond, Nicolas
    IRMAR, UMR 6625, Université de Rennes, Rennes, France.
    Tenenhaus, Arthur
    L2S, UMR CNRS 8506, CNRS–Centrale Supélec–Université Paris-Sud, Université Paris-Saclay, 3 rue Joliot-Curie, Gif-sur-Yvette, France.
    Abdi, Hervé
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States of America.
    A constrained singular value decomposition method that integrates sparsity and orthogonality2019In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 14, no 3, article id e0211463Article in journal (Refereed)
    Abstract [en]

    We propose a new sparsification method for the singular value decomposition—called the constrained singular value decomposition (CSVD)—that can incorporate multiple constraints such as sparsification and orthogonality for the left and right singular vectors. The CSVD can combine different constraints because it implements each constraint as a projection onto a convex set, and because it integrates these constraints as projections onto the intersection of multiple convex sets. We show that, with appropriate sparsification constants, the algorithm is guaranteed to converge to a stable point. We also propose and analyze the convergence of an efficient algorithm for the specific case of the projection onto the balls defined by the norms L1 and L2. We illustrate the CSVD and compare it to the standard singular value decomposition and to a non-orthogonal related sparsification method with: 1) a simulated example, 2) a small set of face images (corresponding to a configuration with a number of variables much larger than the number of observations), and 3) a psychometric application with a large number of observations and a small number of variables. The companion R package, csvd, which implements the algorithms described in this paper, is available for download, along with reproducible examples, from https://github.com/vguillemot/csvd.
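
    The two ball projections the abstract mentions have well-known closed forms; a sketch of the standard constructions (these are generic, not taken from the csvd package):

    ```python
    import numpy as np

    def proj_l2(v):
        """Euclidean projection onto the unit L2 ball: rescale if outside."""
        n = np.linalg.norm(v)
        return v if n <= 1.0 else v / n

    def proj_l1(v, s):
        """Euclidean projection onto the L1 ball of radius s, via
        soft-thresholding with a threshold found by sorting."""
        if np.abs(v).sum() <= s:
            return v.copy()
        u = np.sort(np.abs(v))[::-1]                       # sorted magnitudes
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - s))[0][-1]
        theta = (css[rho] - s) / (rho + 1.0)
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
    ```

    The L1 projection zeroes small coefficients, which is how the sparsity constraint acts on the singular vectors.
    
    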

  • 21.
    Hadj-Selem, Fouad
    et al.
    Energy Transition Institute VeDeCoM, Versailles, France.
    Löfstedt, Tommy
    Umeå University, Faculty of Medicine, Department of Radiation Sciences.
    Dohmatob, Elvis
    PARIETAL Team, INRIA/CEA, Université Paris-Saclay, Gif-sur-Yvette, France.
    Frouin, Vincent
    NeuroSpin, CEA, Université Paris-Saclay, Gif-sur-Yvette, France.
    Dubois, Mathieu
    NeuroSpin, CEA, Université Paris-Saclay, Gif-sur-Yvette, France.
    Guillemot, Vincent
    NeuroSpin, CEA, Université Paris-Saclay, Gif-sur-Yvette, France.
    Duchesnay, Edouard
    NeuroSpin, CEA, Université Paris-Saclay, Gif-sur-Yvette, France.
    Continuation of Nesterov's Smoothing for Regression With Structured Sparsity in High-Dimensional Neuroimaging2018In: IEEE Transactions on Medical Imaging, ISSN 0278-0062, E-ISSN 1558-254X, Vol. 37, no 11, p. 2403-2413Article in journal (Refereed)
    Abstract [en]

    Predictive models can be used on high-dimensional brain images to decode cognitive states or the diagnosis/prognosis of a clinical condition/evolution. Spatial regularization through structured sparsity offers new perspectives in this context and reduces the risk of overfitting the model while providing interpretable neuroimaging signatures by forcing the solution to adhere to domain-specific constraints. Total variation (TV) is a promising candidate for structured penalization: it enforces spatial smoothness of the solution while segmenting predictive regions from the background. We consider the problem of minimizing the sum of a smooth convex loss, a non-smooth convex penalty (whose proximal operator is known) and a wide range of possible complex, non-smooth convex structured penalties such as TV or overlapping group Lasso. Existing solvers are either limited in the functions they can minimize or in their practical capacity to scale to high-dimensional imaging data. Nesterov's smoothing technique can be used to minimize a large number of non-smooth convex structured penalties. However, reasonable precision requires a small smoothing parameter, which slows down the convergence speed to unacceptable levels. To benefit from the versatility of Nesterov's smoothing technique, we propose a first-order continuation algorithm, CONESTA, which automatically generates a sequence of decreasing smoothing parameters. The generated sequence maintains the optimal convergence speed toward any globally desired precision. Our main contributions are: to propose an expression of the duality gap to probe the current distance to the global optimum in order to adapt the smoothing parameter and the convergence speed. This expression is applicable to many penalties and can be used with solvers other than CONESTA. We also propose an expression for the particular smoothing parameter that minimizes the number of iterations required to reach a given precision. Furthermore, we provide a convergence proof and its rate, which is an improvement over classical proximal gradient smoothing methods. We demonstrate on both simulated and high-dimensional structural neuroimaging data that CONESTA significantly outperforms many state-of-the-art solvers in regard to convergence speed and precision.
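
    Nesterov's smoothing, the building block that CONESTA's continuation strategy controls, replaces each non-smooth term max_{|u|<=1} (u*d) by max_{|u|<=1} (u*d - mu*u^2/2), which for 1D total variation gives a closed Huber-like form. A minimal sketch of the smoothed penalty, not the authors' solver:

    ```python
    import numpy as np

    def tv_smoothed(x, mu):
        """Nesterov-smoothed 1D total variation: each |d_i| in
        TV(x) = sum_i |x_{i+1} - x_i| is replaced by its smooth surrogate
        max_{|u|<=1} (u*d_i - mu*u**2/2), evaluated in closed form."""
        d = np.diff(x)
        a = np.abs(d)
        huber = np.where(a <= mu, d ** 2 / (2 * mu), a - mu / 2)
        return float(huber.sum())
    ```

    As mu shrinks the surrogate approaches the exact TV, but the gradient's Lipschitz constant grows like 1/mu, which is the speed/precision trade-off the continuation sequence manages.
    
    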

  • 22.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Computer Engineering Department, Palestine Polytechnic University, Hebron, Palestine.
    Li, Haibo
    KTH.
    100 lines of code for shape-based object localization2016In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 60, p. 458-472Article in journal (Refereed)
    Abstract [en]

    We introduce a simple and effective concept for localizing objects in densely cluttered edge images based on shape information. The shape information is characterized by a binary template of the object's contour, provided to search for object instances in the image. We adopt a segment-based search strategy, in which the template is divided into a set of segments. In this work, we propose our own segment representation, which we call one-pixel segment (OPS), in which each pixel in the template is treated as a separate segment. This is done to achieve the high flexibility required to account for intra-class variations. The OPS representation can also handle scale changes effectively. A dynamic programming algorithm uses the OPS representation to realize the search process, enabling a detailed localization of the object boundaries in the image. The concept's simplicity is reflected in the ease of implementation, as the paper's title suggests. The algorithm works directly with very noisy edge images extracted using the Canny edge detector, without the need for any preprocessing or learning steps. We present our experiments and show that our results outperform those of very powerful, state-of-the-art algorithms.

  • 23.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Human Ear Localization: A Template-based Approach2015Conference paper (Other academic)
    Abstract [en]

    We propose a simple and yet effective technique for shape-based ear localization. The idea is based on using a predefined binary ear template that is matched to ear contours in a given edge image. To cope with changes in ear shapes and sizes, the template is allowed to deform. Deformation is achieved by dividing the template into segments. The dynamic programming search algorithm is used to accomplish the matching process, achieving very robust localization results in various cluttered and noisy setups.

  • 24.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Active Vision for Tremor Disease Monitoring2015In: 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences AHFE 2015, 2015, Vol. 3, p. 2042-2048Conference paper (Refereed)
    Abstract [en]

    The aim of this work is to introduce a prototype for monitoring tremor diseases using computer vision techniques. While vision has been used for this purpose before, the system we introduce differs intrinsically from traditional systems. The essential difference is the placement of the camera on the user's body rather than in front of it, thus reversing the whole process of motion estimation. This is called active motion tracking. Active vision is simpler in setup and achieves more accurate results than traditional arrangements, which we refer to as "passive" here. One main advantage of active tracking is its ability to detect even tiny motions with a simple setup, which makes it very suitable for monitoring tremor disorders.

  • 25.
    Halawani, Alaa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Anani, Adi
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Active vision for controlling an electric wheelchair2012In: Intelligent Service Robotics, ISSN 1861-2776, Vol. 5, no 2, p. 89-98Article in journal (Refereed)
    Abstract [en]

    Most of the electric wheelchairs available in the market are joystick-driven and therefore assume that the user is able to steer the wheelchair by hand. This does not apply to many users who are only capable of moving the head, such as quadriplegia patients. This paper presents a vision-based head motion tracking system to enable such patients to control the wheelchair. The novel approach that we suggest is to use active rather than passive vision to achieve head motion tracking. In active vision-based tracking, the camera is placed on the user's head rather than in front of it. This makes tracking easier and more accurate, and it enhances the resolution, as we demonstrate theoretically and experimentally. The proposed tracking scheme is then used successfully to control our electric wheelchair while navigating in a real-world environment.

  • 26.
    Hallén, Mattias
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Comminution control using reinforcement learning: Comparing control strategies for size reduction in mineral processing2018Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In mineral processing, the grinding comminution process is an integral part since it is often the bottleneck of the concentrating process; thus, small improvements may lead to large savings. By implementing a reinforcement learning controller, this thesis aims to investigate whether it is possible to control the grinding process more efficiently than with traditional control strategies. Based on a calibrated plant simulation, we compare existing control strategies with Proximal Policy Optimization and show a possible increase in profitability under certain conditions.

  • 27.
    Hallén, Mattias
    et al.
    ABB Corporate Research.
    Åstrand, Max
    ABB Corporate Research.
    Sikström, Johannes
    Boliden.
    Servin, Martin
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Reinforcement Learning for Grinding Circuit Control in Mineral Processing2019Conference paper (Refereed)
    Abstract [en]

    Grinding, i.e. reducing the particle size of mined ore, is often the bottleneck of the mining concentrating process. Thus, even small improvements may lead to large increases in profit. The goal of the grinding circuit is two-sided: to maximize the throughput of ore, and to keep the resulting particle size of the ground ore within some acceptable range. In this work we study the control of a two-stage grinding circuit using reinforcement learning. To this end, we present a solution for integrating industrial simulation models into the reinforcement learning framework OpenAI Gym. We compare an existing PID controller, based on vast domain knowledge and years of hand-tuning, with a black-box algorithm called Proximal Policy Optimization on a calibrated grinding circuit simulation model. The comparison shows that it is possible to control the grinding circuit using reinforcement learning. In addition, in contrast to the existing PID control, the algorithm is able to maximize an abstract control goal: maximizing profit as defined by a profit function given by our industrial collaborator. In some operating cases the algorithm is able to control the plant more efficiently than the existing control.
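
    Integrating a simulator into the Gym interface amounts to wrapping it in a class exposing reset/step methods, with the reward set to the profit objective. A minimal sketch around a hypothetical simulator object; the GrindingEnv name, the advance method, and the size window are illustrative, not from the paper:

    ```python
    import numpy as np

    class GrindingEnv:
        """Gym-style wrapper around a plant simulator. The simulator is
        assumed to expose reset() -> state and advance(action) ->
        (throughput, particle_size); both are placeholders here."""

        def __init__(self, simulator, target_size=(0.04, 0.08)):
            self.sim = simulator
            self.lo, self.hi = target_size   # acceptable particle-size band

        def reset(self):
            return np.asarray(self.sim.reset(), dtype=np.float32)

        def step(self, action):
            throughput, particle_size = self.sim.advance(action)
            # toy profit: throughput counts only while the product is in spec
            in_spec = self.lo <= particle_size <= self.hi
            reward = throughput if in_spec else 0.0
            obs = np.array([throughput, particle_size], dtype=np.float32)
            return obs, reward, False, {}
    ```

    With this shape, any Gym-compatible agent (e.g. a PPO implementation) can interact with the plant model through the usual reset/step loop.
    
    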

  • 28.
    Hanqing, Zhang
    et al.
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Wiklund, Krister
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Andersson, Magnus
    Umeå University, Faculty of Science and Technology, Department of Physics.
    A fast and robust circle detection method using isosceles triangles sampling2016In: Pattern Recognition, ISSN 0031-3203, E-ISSN 1873-5142, Vol. 54, p. 218-228Article in journal (Refereed)
    Abstract [en]

    Circle detection using randomized sampling has been developed in recent years to reduce computational intensity. However, randomized sampling is sensitive to noise, which can lead to reduced accuracy and false-positive candidates. To improve the robustness of randomized circle detection under noisy conditions, this paper presents a new methodology for circle detection based upon randomized isosceles triangles sampling. It is shown that the geometrical property of isosceles triangles provides a robust criterion to find relevant edge pixels which, in turn, offers an efficient means to estimate the centers and radii of circles. For best efficiency, the estimated results given by the sampling from individual connected components of the edge map were analyzed using a simple clustering approach. To further improve the accuracy we applied a two-step refinement process using chords and linear error compensation with gradient information of the edge pixels. Extensive experiments using both synthetic and real images have been performed. The results are compared to leading state-of-the-art algorithms and it is shown that the proposed methodology has a number of advantages: it is efficient in finding circles with a low number of iterations, it has a high rejection rate of false-positive circle candidates, and it has high robustness against noise. All this makes it adaptive and useful in many vision applications.
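
    The geometric core of such sampling schemes is to accept a sampled triple of edge pixels only if it passes a shape criterion (here, near-isosceles), then estimate the circle through the triple from the circumcenter. A sketch of that step; the tolerance and helper names are illustrative, not the paper's:

    ```python
    import math

    def circle_from_three(p1, p2, p3):
        """Circumscribed circle (center, radius) of three non-collinear
        points, from the perpendicular-bisector linear system."""
        ax, ay = p1; bx, by = p2; cx, cy = p3
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        r = math.hypot(ax - ux, ay - uy)
        return (ux, uy), r

    def is_isosceles(p1, p2, p3, tol=0.05):
        """Accept the triple only if two side lengths agree within a
        relative tolerance -- the noise-rejection criterion."""
        s = sorted([math.dist(p1, p2), math.dist(p2, p3), math.dist(p1, p3)])
        return (s[1] - s[0]) <= tol * s[1] or (s[2] - s[1]) <= tol * s[2]
    ```

    Triples failing the isosceles test are discarded before any circle is estimated, which is what suppresses false-positive candidates under noise.
    
    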

  • 29.
    Harisubramanyabalaji, Subramani Palanisamy
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Scania CV AB, Södertälje, Sweden.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Nyberg, Mattias
    Gustavsson, Joakim
    Improving Image Classification Robustness Using Predictive Data Augmentation2018In: Computer Safety, Reliability, and Security: SAFECOMP 2018 / [ed] Gallina B., Skavhaug A., Schoitsch E., Bitsch F., Springer, 2018, p. 548-561Conference paper (Refereed)
    Abstract [en]

    Safe autonomous navigation is challenging if there is a failure in the sensing system. A classification algorithm that is robust to camera position, view angle, and environmental conditions for autonomous vehicles of different sizes and types (car, bus, truck, etc.) can safely regulate vehicle control. As training data play a crucial role in robust classification of traffic signs, an effective augmentation technique enriching the model's capacity to withstand variations in the urban environment is required. In this paper, a framework to identify model weaknesses and a targeted augmentation methodology are presented. Based on off-line behavior identification, the exact limitations of a Convolutional Neural Network (CNN) model are estimated, so that only those challenge levels necessary for improved classifier robustness are augmented. Predictive Augmentation (PA) and Predictive Multiple Augmentation (PMA) methods are proposed to adapt the model based on acquired challenges with a high numerical value of confidence. We validated our framework on two different training datasets and with five generated test groups containing varying levels of challenge (simple to extreme). The results show an impressive improvement of 5-20% in overall classification accuracy while maintaining high confidence.

  • 30.
    Hedström, Lucas
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Classifying the rotation of bacteria using neural networks2019Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Bacteria can quickly spread throughout the human body, making certain diseases hard or impossible to cure. In order to understand how bacteria can initiate and develop into an infection, microfluidic chambers in a lab environment are used as a template of how bacteria react to different types of flows. However, accurately tracking the movement of bacteria is a difficult task, where small objects have to be captured with high resolution and digitally analysed with computationally heavy methods. Popular imaging methods utilise digital holographic microscopy, where three-dimensional movement is captured in two-dimensional images by numerical reconstruction of the diffraction of light. Since numerical reconstructions become computationally heavy when good accuracy is required, this master's thesis focuses on evaluating the possibility of using convolutional neural networks to quickly and accurately determine the spatial properties of bacteria. Through thorough testing and analysis of state-of-the-art and older networks, a new network design is presented, designed to eliminate as many imaging issues as possible. We found that certain network design choices help reduce the overall error of the system, and that with a well-chosen training set and sensible augmentations, some networks were able to reach 60% classification accuracy when determining the vertical rotation of the bacteria. Unfortunately, due to the lack of experimental data where the ground truth is known, not much experimental testing could be performed. However, a few tests showed that images of high quality could be classified within the expected range of vertical rotation.

  • 31.
    Heith, Anne
    Umeå University, Faculty of Arts, Comparative Literature and Scandinavian Languages.
    Gömda. En sann historia: romantik, spänning, melodram och populärorientalism2006In: Svenskläraren: Tidskrift för svenskundervisning, ISSN 0346-2412, no 4, p. 20-26Article in journal (Other (popular science, discussion, etc.))
  • 32.
    Hellström, Thomas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ostovar, Ahmad
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Detection of Trees Based on Quality Guided Image Segmentation2014In: Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014): New trends in mobile robotics, perception and actuation for agriculture and forestry / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, RHEA Consortium , 2014, p. 531-540Conference paper (Refereed)
    Abstract [en]

    Detection of objects is crucial for any autonomous field robot or vehicle. Typically, object detection is used to avoid collisions when navigating, but detection capability is essential also for autonomous or semi-autonomous object manipulation, such as automatic gripping of logs with harvester cranes used in forestry. In the EU-financed project CROPS, special focus is given to detection of trees, bushes, humans, and rocks in forest environments. In this paper we address the specific problem of identifying trees using color images. The presented method combines algorithms for seed point generation and segmentation similar to region growing. Both algorithms are tailored by heuristics for the specific task of tree detection. Seed points are generated by scanning a vertically compressed hue matrix for outliers. Each of these seed points is then used to segment the entire image into segments with pixels similar to a small surrounding of the seed point. All generated segments are refined by a series of morphological operations, taking into account the predominantly vertical nature of trees. The refined segments are evaluated by a heuristically designed quality function. For each seed point, the segment with the highest quality is selected among all segments that cover the seed point. The set of all selected segments constitutes the identified tree objects in the image. The method was evaluated with images containing in total 197 trees, collected in forest environments in northern Sweden. In this preliminary evaluation, the detection precision was 81% and the recall rate 87%.
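
    The seed-point step can be illustrated by collapsing the hue matrix vertically and scanning the resulting column profile for outliers. A sketch using a z-score outlier criterion as a stand-in; the paper's actual heuristic may differ:

    ```python
    import numpy as np

    def seed_columns_from_hue(hue, z=2.5):
        """Vertically compress a hue matrix (mean over rows) and return the
        column indices whose compressed hue deviates more than z standard
        deviations from the profile mean -- candidate seed columns."""
        profile = hue.mean(axis=0)               # vertically compressed hue
        mu, sigma = profile.mean(), profile.std()
        return np.nonzero(np.abs(profile - mu) > z * sigma)[0]
    ```

    A tree trunk spanning most of an image column shifts that column's mean hue away from the background, which is what makes it stand out in the compressed profile.
    
    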

  • 33.
    Hellström, Thomas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A software framework for agricultural and forestry robotics2012In: Proceedings of the first International Conference on Robotics and associated High-technologies and Equipment for agriculture: Applications of automated systems and robotics for crop protection in sustainable precision agriculture / [ed] Andrea Peruzzi, Pisa: Pisa University Press , 2012, p. 171-176Conference paper (Refereed)
    Abstract [en]

    In this paper we describe the on-going development of a generic software framework for agricultural and forestry robots. The goal is to provide generic high-level functionality and to encourage distributed and structured programming, thus leading to faster and simplified development of robots. Different aspects of the framework are described using different architecture views. We show how these views complement each other in a way that supports the development and description of robot software.

  • 34.
    Hellström, Thomas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A software framework for agricultural and forestry robots2013In: Industrial robot, ISSN 0143-991X, E-ISSN 1758-5791, Vol. 40, no 1, p. 20-26Article in journal (Refereed)
    Abstract [en]

    Purpose: The purpose of this paper is to describe a generic software framework for development of agricultural and forestry robots. The primary goal is to provide generic high-level functionality and to encourage distributed and structured programming, thus leading to faster and simplified development of robots. A secondary goal is to investigate the value of several architecture views when describing different software aspects of a robotics system.

    Design/methodology/approach: The framework is constructed with a hybrid robot architecture, with a static state machine that implements a flow diagram describing each specific robot. Furthermore, generic modules for GUI, resource management, performance monitoring, and error handling are included. The framework is described with logical, development, process, and physical architecture views.

    Findings: The multiple architecture views provide complementary information that is valuable both during and after the design phase. The framework has been shown to be efficient and time saving when integrating work by several partners in several robotics projects. Although the framework is guided by the specific needs of harvesting agricultural robots, the result is believed to be of general value for development also of other types of robots.

    Originality/value: In this paper, the authors present a novel generic framework for development of agricultural and forestry robots. The robot architecture uses a state machine as replacement for the planner commonly found in other hybrid architectures. The framework is described with multiple architecture views.
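
    The static state machine at the core of the described architecture might look like the following minimal sketch; the states, events and flow are invented for illustration and are not taken from the framework itself.

```python
# Minimal sketch of a static state machine encoding a robot's flow diagram,
# in the spirit of the framework described above. All state and event names
# here are hypothetical.

class StateMachine:
    def __init__(self, transitions, start):
        self.transitions = transitions  # {state: {event: next_state}}
        self.state = start

    def fire(self, event):
        """Advance on an event; unknown events keep the current state."""
        self.state = self.transitions.get(self.state, {}).get(event, self.state)
        return self.state

# A toy harvesting flow: navigate -> detect -> grip -> navigate ...
flow = {
    "NAVIGATE": {"target_found": "DETECT"},
    "DETECT":   {"object_confirmed": "GRIP", "false_alarm": "NAVIGATE"},
    "GRIP":     {"done": "NAVIGATE", "battery_low": "IDLE"},
}
sm = StateMachine(flow, start="NAVIGATE")
for ev in ["target_found", "object_confirmed", "done"]:
    sm.fire(ev)
print(sm.state)  # → NAVIGATE
```

    Because the transition table is declared statically, the flow diagram of each specific robot can be inspected and validated without executing the robot code, which is one motivation for this style of hybrid architecture.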

  • 35.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Réhman, Shafiq ur
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Embodied tele-presence system (ETS): designing tele-presence for video teleconferencing2014In: Design, user experience, and usability: User experience design for diverse interaction platforms and environments / [ed] Aaron Marcus, Springer International Publishing Switzerland, 2014, Vol. 8518, p. 574-585Conference paper (Refereed)
    Abstract [en]

    In spite of the progress made in teleconferencing over the last decades, it is still far from a resolved issue. In this work, we present an intuitive video teleconferencing system, namely the Embodied Tele-Presence System (ETS), which is based on the concept of embodied interaction. This work presents the results of a user study considering the hypothesis: “Embodied interaction based video conferencing performs better than standard video conferencing in representing nonverbal behaviors, thus creating a ‘feeling of presence’ of a remote person among his/her local collaborators”. Our ETS integrates standard audio-video conferencing with mechanical embodiment of the head gestures of a remote person (as nonverbal behavior) to enhance the level of interaction. To highlight the technical challenges and design principles behind such tele-presence systems, we have also performed a system evaluation, which shows the accuracy and efficiency of our ETS design. The paper further provides an overview of our case study and an analysis of our user evaluation. The user study shows that the proposed embodied interaction approach to video teleconferencing increases ‘in-meeting interaction’ and enhances the ‘feeling of presence’ between the remote participant and his/her collaborators.

  • 36.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Gaze perception and awareness in smart devices2016In: International journal of human-computer studies, ISSN 1071-5819, E-ISSN 1095-9300, Vol. 92-93, p. 55-65Article in journal (Refereed)
    Abstract [en]

    Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness/perception is ambiguous during video teleconferencing performed on computer-based devices (such as laptops, tablets, and smart-phones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., the 2D screen is unable to deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, the video teleconferencing devices commonly used today are smart devices with 2D screens. Therefore, there is a need to improve gaze awareness/perception in these smart devices. In this work, we have revisited the question: how to improve a remote user's gaze awareness among his/her collaborators. Our hypothesis is that an accurate gaze perception can be achieved by the ‘3D embodiment’ of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform in combination with a smart device is a novel setup that is used for studying gaze awareness/perception in 2D screen-based smart devices during video teleconferencing. Two important gaze-related issues are considered in this work: (i) the ‘Mona Lisa gaze effect’ – the gaze is always directed at the observer independent of his/her position in the room, and (ii) ‘gaze awareness/faithfulness’ – the ability to perceive an accurate spatial relationship between the observing person and the object. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze in distant space.

  • 37.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH Royal Institute of Technology.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Tele-Immersion: Virtual Reality based Collaboration2016In: HCI International 2016: Posters' Extended Abstracts : 18th International Conference, HCI International 2016, Toronto, Canada, July 17-22, 2016, Proceedings, Part I / [ed] Constantine Stephanidis, Springer, 2016, p. 352-357Conference paper (Refereed)
    Abstract [en]

    The ‘perception of being present in another space’ during video teleconferencing is a challenging task. This work makes an effort to improve a user's perception of being ‘present’ in another space by employing a virtual reality (VR) headset and an embodied telepresence system (ETS). In our application scenario, a remote participant uses a VR headset to collaborate with local collaborators. At the local site, an ETS is used as a physical representation of the remote participant among his/her local collaborators. The head movements of the remote person are mapped and presented by the ETS along with audio-video communication. Key considerations of the complete design are discussed, and solutions to challenges related to head tracking, audio-video communication and data communication are presented. The proposed approach is validated by a user study in which quantitative analysis is done on immersion and presence parameters.

  • 38.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Telepresence Mechatronic Robot (TEBoT): Towards the design and control of socially interactive bio-inspired system2016In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967, Vol. 31, no 5, p. 2597-2610Article in journal (Refereed)
    Abstract [en]

    Socially interactive systems are embodied agents that engage in social interactions with humans. From a design perspective, these systems are built by considering a biologically inspired (bio-inspired) design that can mimic and simulate human-like communication cues and gestures. The design of a bio-inspired system usually consists of (i) studying biological characteristics, (ii) designing a similar biological robot, and (iii) motion planning that can mimic the biological counterpart. In this article, we present the design, development, control strategy and verification of our socially interactive bio-inspired robot, namely the Telepresence Mechatronic Robot (TEBoT). The key contribution of our work is the embodiment of real human neck movements by (i) designing a mechatronic platform based on the dynamics of a real human neck and (ii) capturing the real head movements through our novel single-camera based vision algorithm. Our socially interactive bio-inspired system is based on an intuitive integration-design strategy that combines a computer vision based geometric head pose estimation algorithm, a model based design (MBD) approach and real-time motion planning techniques. We have conducted extensive testing to demonstrate the effectiveness and robustness of our proposed system.

  • 39.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Distance Communication: Trends and Challenges and How to Resolve them2014In: Strategies for a creative future with computer science, quality design and communicability / [ed] Francisco V. C. Ficarra, Kim Veltman, Kaoru Sumi, Jacqueline Alma, Mary Brie, Miguel C. Ficarra, Domen Verber, Bojan Novak, and Andreas Kratky, Italy: Blue Herons Editions , 2014Chapter in book (Refereed)
    Abstract [en]

    Distance communication is becoming an important part of our lives because of the current advancement in computer mediated communication (CMC). Despite this advancement, especially in video teleconferencing, CMC is still far from face-to-face (FtF) interaction. This study focuses on the advancements in video teleconferencing, their trends and challenges. Furthermore, this work presents an overview of previously developed hardware and software techniques to improve the video teleconferencing experience. After discussing the background development of video teleconferencing, we propose an intuitive solution to improve the video teleconferencing experience. To support the proposed solution, an embodied interaction based distance communication framework is developed. The effectiveness of this framework is validated by user studies. To summarize, this work has considered the following questions: What are the factors which make video teleconferencing different from face-to-face interaction? What have researchers done so far to improve video teleconferencing? How can the teleconferencing experience be further improved? How can more non-verbal modalities be added to enhance the video teleconferencing experience? At the end we also provide future directions for embodied interaction based video teleconferencing.

  • 40.
    Khan, Muhammad Sikandar Lal
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    La Hera, Pedro
    Liu, Feng
    Li, Haibo
    A pilot user's prospective in mobile robotic telepresence system2014In: 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA 2014), IEEE, 2014Conference paper (Refereed)
    Abstract [en]

    In this work we present an interactive video conferencing system specifically designed to enhance the experience of video teleconferencing for the pilot user. We have used an Embodied Telepresence System (ETS), which was previously designed to enhance the experience of video teleconferencing for the collaborators. In this work we have deployed the ETS in a novel scenario to improve the experience of the pilot user during distance communication. The ETS is used to adjust the view of the pilot user at the distant location (e.g. a distantly located conference/meeting). A velocity profile control for the ETS was developed, which is implicitly controlled by the head of the pilot user. An experiment was conducted to test whether the view adjustment capability of the ETS increases the collaboration experience of video conferencing for the pilot user. A user study was conducted in which participants (pilot users) interacted using both the ETS and a traditional computer-based video conferencing tool. Overall, the user study suggests the effectiveness of our approach and hence an enhanced video conferencing experience for the pilot user.

  • 41.
    Kozlov, Alex
    et al.
    Space Applications Services, Zaventem, Belgium.
    Gancet, Jeremi
    Space Applications Services, Zaventem, Belgium.
    Letier, Pierre
    Space Applications Services, Zaventem, Belgium.
    Schillaci, Guido
    Humboldt University Berlin, Berlin, Germany.
    Hafner, Verena
    Humboldt University Berlin, Berlin, Germany.
    Fonooni, Benjamin
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nevatia, Yashodhan
    Space Applications Services, Zaventem, Belgium.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Development of a Search and Rescue field robotic assistant2013In: 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) / [ed] IEEE, IEEE, 2013Conference paper (Refereed)
    Abstract [en]

    The work introduced in this paper was performed as part of the FP7 INTRO (Marie Curie ITN) project. We describe the activities undertaken towards the development of a field robotic assistant for a Search and Rescue application. We specifically target a rubble-clearing task, where the robot ferries small pieces of rubble between two waypoints assigned to it by the human. The aim is to complement a human worker with a robotic assistant for this task, while maintaining a comparable level of speed and efficiency in the task execution. Towards this end we develop and integrate software capabilities in mobile navigation, arm manipulation and high-level task sequence learning. Early outdoor experiments carried out in a quarry are also introduced.

  • 42.
    Lennartsson, Louise
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Photogrammetric methods for calculating the dimensions of cuboids from images2015Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    There are situations where you would like to know the size of an object but do not have a ruler nearby. However, it is likely that you are carrying a smartphone with an integrated digital camera, so imagine if you could snap a photo of the object to get a size estimate. Different methods for finding the dimensions of a cuboid from a photograph are evaluated in this project. A simple Android application implementing these methods has also been created.

    To be able to perform measurements of objects in images we need to know how the scene is reproduced by the camera. This depends on the traits of the camera, called the intrinsic parameters. These parameters are unknown unless a camera calibration is performed, which is a non-trivial task. Because of this, eight smartphone cameras of different models were calibrated in search of similarities that could give grounds for generalisations.

    To be able to determine the size of the cuboid the scale needs to be known, which is why a reference object is used. In this project a credit card is used as reference, which is placed on top of the cuboid. The four corners of the reference as well as four corners of the cuboid are used to determine the dimensions of the cuboid. Two methods, one dependent and one independent of the intrinsic parameters, are used to find the width and length, i.e. the sizes of the two dimensions that share a plane with the reference. These results are then used in another two methods to find the height of the cuboid. Errors were purposely introduced to the corners to investigate the performance of the different methods.

    The results show that the different methods perform very well and are all equally suitable for this type of problem. They also show that having correct reference corners is more important than having correct object corners as the results were highly dependent on the accuracy of the reference corners. Another conclusion is that the camera calibration is not necessary because different approximations of the intrinsic parameters can be used instead.
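
    The core idea of using a known reference to set the scale can be illustrated with a simple fronto-parallel approximation; the corner-based methods in the thesis handle perspective properly, so the function below is a hypothetical simplification, not one of the evaluated methods.

```python
# Back-of-envelope sketch of reference-based scaling: a credit card of known
# size lies in the same plane as the cuboid's top face, so pixel distances in
# that plane can be converted to millimetres. Assumes a roughly fronto-parallel
# view; this is an illustrative simplification.
import math

CARD_W_MM = 85.60  # long edge of an ISO/IEC 7810 ID-1 card

def pixel_dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plane_size_mm(card_edge_px, object_edge_px):
    """Scale an object edge by the mm-per-pixel factor of the card edge."""
    scale = CARD_W_MM / pixel_dist(*card_edge_px)
    return pixel_dist(*object_edge_px) * scale

# The card's long edge spans 214 px; the cuboid's top edge spans 428 px.
card_edge = ((100, 100), (314, 100))
box_edge  = ((50, 300), (478, 300))
print(round(plane_size_mm(card_edge, box_edge), 2))  # → 171.2
```

    The height of the cuboid requires more than a single in-plane scale factor, which is why the thesis treats it with separate methods.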

  • 43.
    Li, Bo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Pushing edge detection to the limit: towards building semantic features for human emotion recognition2013Licentiate thesis, comprehensive summary (Other academic)
  • 44.
    Li, Bo
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Jevtic, Aleksandar
    Robosoft,France.
    Söderström, Ulrik
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Fast edge detection by center of mass2013In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013), Kitakyushu, Japan, 2013, p. 103-110Conference paper (Refereed)
    Abstract [en]

    In this paper, a novel edge detection method that computes the image gradient using the concept of Center of Mass (COM) is presented. By using an integral image, the algorithm runs with a constant number of operations per pixel, independently of its scale. Compared with conventional convolutional edge detectors such as the Sobel detector, the proposed method performs faster when the region size is larger than 9×9. The proposed method can be used as a framework for multi-scale edge detectors when the goal is fast performance. Experimental results show that edge detection by COM is competitive with Canny edge detection.
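
    The constant-time-per-pixel property comes from the integral image, which makes any window sum cost four lookups. The following sketch illustrates that mechanism with a simple box-difference gradient; the exact COM weighting of the paper is not reproduced here, so the gradient formula is an illustrative assumption.

```python
# Illustrative sketch: an integral image gives O(1) window sums, so a
# window-based gradient costs the same regardless of window size. The
# right-half-minus-left-half gradient below is a stand-in, not the paper's
# exact center-of-mass formulation.

def integral_image(img):
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = img[r][c] + ii[r][c + 1] + ii[r + 1][c] - ii[r][c]
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] in O(1) via four integral-image lookups."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

def gradient_x(ii, r, c, half):
    """Horizontal gradient ~ right-half sum minus left-half sum of the window."""
    left = window_sum(ii, r - half, c - half, r + half + 1, c)
    right = window_sum(ii, r - half, c + 1, r + half + 1, c + half + 1)
    return right - left

# Vertical step edge: dark left half, bright right half.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
ii = integral_image(img)
print(gradient_x(ii, 2, 2, 1))  # positive response across the dark-to-bright edge
```

    Doubling `half` changes which pixels are summed but not the number of operations, which is the scale-independence the abstract claims.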

  • 45.
    Li, Bo
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Söderström, Ulrik
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Restricted Hysteresis Reduce Redundancy in Edge Detection2013In: Journal of Signal and Information Processing, ISSN 2159-4465, E-ISSN 2159-4481, Vol. 4, no 3B, p. 158-163Article in journal (Refereed)
    Abstract [en]

    In edge detection algorithms there is a common redundancy problem, especially when the gradient direction is close to -135°, -45°, 45°, and 135°. A double-edge effect appears on edges around these directions, caused by the discrete calculation of non-maximum suppression. Many algorithms use edge points as features for further tasks such as line extraction, curve detection, matching and recognition, so redundancy is an important factor in algorithm speed and accuracy. We find that most edge detection algorithms have a redundancy of 50% in the worst case and 0% in the best case, depending on the edge direction distribution. The typical redundancy rate on natural images is approximately between 15% and 20%. Based on Canny's framework, we propose a restriction in the hysteresis step. Our experiments show that the proposed restricted hysteresis successfully reduces the redundancy.

  • 46.
    Li, Bo
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Söderström, Ulrik
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    KTH.
    Independent Thresholds on Multi-scale Gradient Images2013In: The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013 (ICISIP2013), Kitakyushu, Japan, 2013, p. 124-131Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a multi-scale edge detection algorithm based on proportional scale summing. Our analysis shows that proportional scale summing successfully improves the edge detection rate by applying independent thresholds on multi-scale gradient images. The proposed method improves edge detection and localization by summing gradient images with a proportional weight c^n (c < 1), which ensures that the detected edges are as close as possible to the fine scale. We employ non-maxima suppression and a thinning step similar to the Canny edge detection framework on the summed gradient images. The proposed method detects edges successfully, and experimental results show that it leads to better edge detection performance than the Canny edge detector and the scale multiplication edge detector.
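
    The proportional summing step can be sketched directly from the abstract: gradient images at scales n = 0, 1, 2, … are summed with weights c^n, biasing the result toward the finest scale. The per-scale gradient values below are toy data; how each scale's gradient image is computed is outside this sketch.

```python
# Sketch of proportional scale summing: weight the gradient image at scale n
# by c**n (c < 1), so fine-scale detail dominates the summed result.

def proportional_sum(gradients, c=0.5):
    """Weighted sum of per-scale gradient images, finest scale first."""
    rows, cols = len(gradients[0]), len(gradients[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for n, g in enumerate(gradients):
        w = c ** n
        for r in range(rows):
            for col in range(cols):
                out[r][col] += w * g[r][col]
    return out

# Three 1x3 toy "gradient images" at increasingly coarse scales.
fine   = [[0.0, 8.0, 0.0]]
medium = [[2.0, 4.0, 2.0]]
coarse = [[3.0, 3.0, 3.0]]
summed = proportional_sum([fine, medium, coarse], c=0.5)
print(summed)  # → [[1.75, 10.75, 1.75]]
```

    The sharp fine-scale peak at the middle pixel survives the summation, while the diffuse coarse-scale response is attenuated by the c^n weights, which is the localization property the abstract describes.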

  • 47.
    Li, Liu
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Lindahl, Olof
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Vibrotactile chair: A social interface for blind2006In: Proceedings SSBA 2006: Symposium on image analysis, Umeå, March 16-17, 2006 / [ed] Fredrik Georgsson, 1971-, Niclas Börlin, 1968-, Umeå: Umeå universitet. Institutionen för datavetenskap , 2006, p. 117-120Conference paper (Other academic)
    Abstract [en]

    In this paper we present our vibrotactile chair, a social interface for the blind. With this chair a blind person can get on-line emotion information from the person he/she is facing. This greatly enhances communication ability and improves the quality of social life of the blind. In the paper we discuss the technical challenges and design principles behind the chair, and introduce the experimental platform: the tactile facial expression appearance recognition system (TEARS)™.

  • 48.
    Lindroos, Ola
    et al.
    SLU.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Pedro, La Hera
    SLU.
    Hohnloser, Peter
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Estimating the position of the harvester head: a key step towards the precision forestry of the future?2015In: Croatian Journal of Forest Engineering, ISSN 1845-5719, E-ISSN 1848-9672, Vol. 36, no 2, p. 147-164Article in journal (Refereed)
    Abstract [en]

    Modern harvesters are technologically sophisticated, with many useful features such as the ability to automatically measure stem diameters and lengths. This information is processed in real time to support value optimization when cutting stems into logs. It can also be transferred from the harvesters to centralized systems and used for wood supply management. Such information management systems have been available since the 1990s in Sweden and Finland, and are constantly being upgraded. However, data on the position of the harvester head relative to the machine are generally not recorded during harvesting. The routine acquisition and analysis of such data could offer several opportunities to improve forestry operations and related processes in the future. Here, we analyze the possible benefits of having this information, as well as the steps required to collect and process it. The benefits and drawbacks of different sensing technologies are discussed in terms of potential applications, accuracy and cost. We also present the results of preliminary testing using two of the proposed methods. Our analysis indicates that an improved scope for mapping and controlling machine movement is the main benefit directly related to the conduct of forestry operations. In addition, there are important indirect benefits relating to ecological mapping. Our analysis suggests that both of these benefits can be realized by measuring the angles of crane joints or the locations of crane segments and using the resulting information to compute the head's position. In keeping with our findings, two companies have recently introduced sensor-equipped crane solutions.
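
    Computing the head position from crane joint angles, as proposed in the article, amounts to forward kinematics along the crane segments. The planar two-segment crane below is an illustrative simplification; real harvester cranes have more joints, including prismatic extensions.

```python
# Hypothetical sketch: planar forward kinematics from joint angles and
# segment lengths to the harvester head position. A real crane has more
# degrees of freedom; this two-link chain only illustrates the principle.
import math

def head_position(lengths, angles):
    """Each joint angle (rad) is relative to the previous link's direction."""
    x = y = 0.0
    heading = 0.0
    for seg_len, angle in zip(lengths, angles):
        heading += angle
        x += seg_len * math.cos(heading)
        y += seg_len * math.sin(heading)
    return x, y

# Boom 5 m raised 30 degrees; stick 3 m folded 30 degrees down from the boom,
# so the stick ends up horizontal.
x, y = head_position([5.0, 3.0], [math.radians(30), math.radians(-30)])
print(round(x, 3), round(y, 3))
```

    With measured joint angles streaming from sensors, the same computation run in real time would give the head trajectory the article argues is valuable for mapping and control.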

  • 49.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. SIAT, Chinese Academy of Science, China.
    ur Rehman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. SIAT, Chinese Academy of Science, China.
    Multi-Gesture based Football Game in Smart Phones2013In: SA '13 SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, NY, USA: Association for Computing Machinery (ACM), 2013Conference paper (Refereed)
  • 50.
    Lu, Zhihan
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    ur Réhman, Shafiq
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Khan, Muhammad Sikandar Lal
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Li, Haibo
    Royal Institute of Technology (KTH), Sweden..
    Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix2013In: Proceedings of 2013 International Conferenceon Virtual Reality and Visualization (ICVRV 2013), 14-15 September 2013, Xi'an, Shaanxi, China, 2013Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a simple anaglyph 3D stereo generation algorithm for 2D video sequences from a monocular camera. In our novel approach we employ a camera pose estimation method to directly generate stereoscopic 3D from 2D video without explicitly building a depth map. Our cost-effective method is suitable for arbitrary real-world video sequences and produces smooth results. We use image stitching based on plane correspondence using the fundamental matrix. To this end we also demonstrate that correspondence-plane image stitching based on the homography matrix alone cannot generate better results. Furthermore, we utilize the structure-from-motion (with fundamental matrix) based reconstructed camera pose model to accomplish the visual anaglyph 3D illusion. The proposed approach demonstrates very good performance for most video sequences.
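
    Once two views are available, the final red-cyan anaglyph composition is a simple per-pixel channel merge: red from the left view, green and blue from the right. The sketch below shows only that last step on toy RGB data; the pose estimation and stitching stages of the paper are not reproduced.

```python
# Illustrative red-cyan anaglyph merge of two equally sized RGB images,
# represented as nested lists of (r, g, b) tuples. Only the final
# composition step is shown; view generation is assumed done upstream.

def anaglyph(left, right):
    """Take red from the left view and green/blue from the right view."""
    out = []
    for lrow, rrow in zip(left, right):
        out.append([(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)])
    return out

left  = [[(255, 0, 0), (10, 20, 30)]]
right = [[(0, 255, 255), (40, 50, 60)]]
print(anaglyph(left, right))  # → [[(255, 255, 255), (10, 50, 60)]]
```

    Viewed through red-cyan glasses, each eye then receives its intended view, producing the depth illusion.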
