Detection of Trees Based on Quality Guided Image Segmentation
Ostovar, Ahmad; Hellström, Thomas (both: Umeå University, Faculty of Science and Technology, Department of Computing Science, Robotics)
2014 (English). In: Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014): New trends in mobile robotics, perception and actuation for agriculture and forestry / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, RHEA Consortium, 2014, p. 531-540. Conference paper, Published paper (Refereed).
Abstract [en]

Detection of objects is crucial for any autonomous field robot or vehicle. Typically, object detection is used to avoid collisions when navigating, but detection capability is also essential for autonomous or semi-autonomous object manipulation, such as automatic gripping of logs with harvester cranes used in forestry. In the EU-financed project CROPS, special focus is given to detection of trees, bushes, humans, and rocks in forest environments. In this paper we address the specific problem of identifying trees using color images. The presented method combines algorithms for seed point generation and segmentation similar to region growing. Both algorithms are tailored by heuristics for the specific task of tree detection. Seed points are generated by scanning a vertically compressed hue matrix for outliers. Each of these seed points is then used to segment the entire image into segments with pixels similar to a small region surrounding the seed point. All generated segments are refined by a series of morphological operations, taking into account the predominantly vertical nature of trees. The refined segments are evaluated by a heuristically designed quality function. For each seed point, the segment with the highest quality is selected among all segments that cover the seed point. The set of all selected segments constitutes the identified tree objects in the image. The method was evaluated on images containing in total 197 trees, collected in forest environments in northern Sweden. In this preliminary evaluation, detection precision was 81% and recall 87%.
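The seed-point and region-growing pipeline described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the thresholds (`z_thresh`, `tol`), the mid-height seed placement, and the height/width quality heuristic are all assumptions, and the morphological refinement step is omitted.

```python
import numpy as np

def generate_seed_points(hue, z_thresh=2.0):
    """Seed points from outliers in a vertically compressed hue matrix.

    Compresses the hue image column-wise (mean over rows) and flags
    columns whose compressed hue deviates strongly from the image-wide
    mean. Seeds are placed at mid-height of each flagged column.
    """
    profile = hue.mean(axis=0)                 # vertical compression
    mu, sigma = profile.mean(), profile.std()
    if sigma == 0:
        return []
    cols = np.where(np.abs(profile - mu) > z_thresh * sigma)[0]
    row = hue.shape[0] // 2                    # illustrative seed placement
    return [(row, int(c)) for c in cols]

def grow_region(hue, seed, tol=0.05):
    """Simple 4-connected region growing around a seed point."""
    h, w = hue.shape
    ref = hue[seed]
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
            continue
        if abs(hue[r, c] - ref) > tol:         # pixel too dissimilar to seed
            continue
        mask[r, c] = True
        stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return mask

def segment_quality(mask):
    """Heuristic quality: reward tall, narrow (tree-like) segments."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return height / width
```

On a synthetic image with one vertical stripe of distinct hue, the seed generator flags the stripe columns, region growing recovers the stripe, and the quality function scores it highly because it is tall and narrow.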

Place, publisher, year, edition, pages
RHEA Consortium, 2014, p. 531-540.
Keywords [en]
Seed point, Image segmentation, Region growing
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:umu:diva-93290
ISBN: 978-84-697-0248-2 (print)
OAI: oai:DiVA.org:umu-93290
DiVA, id: diva2:747118
Conference
Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014)
Funder
EU, FP7, Seventh Framework Programme, 246252
Available from: 2014-09-15. Created: 2014-09-15. Last updated: 2019-11-11. Bibliographically approved.
In thesis
1. Object Detection and Recognition in Unstructured Outdoor Environments
2019 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision and machine learning based systems are often developed to replace humans in harsh, dangerous, or tedious situations, as well as to reduce the required time to accomplish a task. Another goal is to increase performance by introducing automation to tasks such as inspections in manufacturing applications, sorting timber during harvesting, surveillance, fruit grading, yield prediction, and harvesting operations. Depending on the task, a variety of object detection and recognition algorithms can be applied, including both conventional and deep learning based approaches. Moreover, within the process of developing image analysis algorithms, it is essential to consider environmental challenges, e.g. illumination changes, occlusion, shadows, and divergence in colour, shape, texture, and size of objects.

The goal of this thesis is to address these challenges to support development of autonomous agricultural and forestry systems with enhanced performance and reduced need for human involvement. This thesis provides algorithms and techniques based on adaptive image segmentation for tree detection in forest environments and for yellow pepper recognition in greenhouses. For segmentation, seed point generation and a region growing method were used to detect trees. An algorithm based on reinforcement learning was developed to detect yellow peppers. RGB and depth data were integrated and used in classifiers to detect trees, bushes, stones, and humans in forest environments. Another part of the thesis describes deep learning based approaches to detect stumps and classify the level of rot based on images.

Another major contribution of this thesis is a method using infrared images to detect humans in forest environments. To detect humans, one shape-dependent and one shape-independent method were proposed.

Algorithms to recognize the intention of humans based on hand gestures were also developed. 3D hand gestures were recognized by first detecting and tracking hands in a sequence of depth images, and then utilizing optical flow constraint equations.
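For reference, the optical flow constraint mentioned above is the standard brightness-constancy equation (a general fact, not a detail taken from the thesis):

```latex
I_x u + I_y v + I_t = 0
```

where I_x, I_y, and I_t are the spatial and temporal image derivatives and (u, v) is the flow vector at a pixel.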

The thesis also presents methods to answer human queries about objects and their spatial relation in images. The solution was developed by merging a deep learning based method for object detection and recognition with natural language processing techniques.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2019. p. 88
Series
Report / UMINF, ISSN 0348-0542 ; 19.08
Keywords
Computer vision, Deep Learning, Harvesting Robots, Automatic Detection and Recognition
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-165069
ISBN: 978-91-7855-147-7
Public defence
2019-12-05, MA121, MIT Building, Umeå, 13:00 (English)
Available from: 2019-11-14. Created: 2019-11-08. Last updated: 2019-11-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Hellström, Thomas; Ostovar, Ahmad
