Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning
Umeå University, Faculty of Science and Technology, Department of Computing Science. Division of Forestry and Forest Resources, Norwegian Institute of Bioeconomy Research (NIBIO), P.O. Box 115, 1431 Ås, Norway. ORCID iD: 0000-0003-0830-5303
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no 7, article id 1579. Article in journal (Refereed). Published.
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting, and thereby on the individual forest owner and collectively on the forest and wood processing industries. Accurate recording of the presence of RBR during timber harvesting would enable mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classified stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. In this work we used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Using only the correct outputs (true positives) of the stump detector, stumps without and with RBR were correctly classified with accuracies of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy for stumps with rot = 0%, 0% < rot < 50%, and rot ≥ 50%, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.
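The precision/recall figures and the three-class rot binning used in the abstract follow standard definitions; a minimal sketch, with hypothetical counts chosen only for illustration (not taken from the paper):

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics from raw true/false positive and
    false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

def rot_class(rot_fraction):
    """Map a stump's rot coverage fraction to the three infestation
    classes described in the abstract."""
    if rot_fraction == 0.0:
        return "rot = 0%"
    elif rot_fraction < 0.5:
        return "0% < rot < 50%"
    else:
        return "rot >= 50%"

# Hypothetical detector counts: 95 true positives, 5 false positives,
# 24 missed stumps.
p, r = precision_recall(tp=95, fp=5, fn=24)
print(round(p, 2), round(r, 2))  # prints 0.95 0.8
print(rot_class(0.3))            # prints 0% < rot < 50%
```

Note that, as in the paper, the classification accuracies are conditioned on the detector's true positives only: misdetected stumps never reach the rot classifier.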

Place, publisher, year, edition, pages
MDPI, 2019. Vol. 19, no 7, article id 1579
Keywords [en]
deep learning; forest harvesting; tree stumps; automatic detection and classification
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Analysis
Identifiers
URN: urn:nbn:se:umu:diva-157716
DOI: 10.3390/s19071579
ISI: 000465570700098
PubMedID: 30939827
OAI: oai:DiVA.org:umu-157716
DiVA, id: diva2:1301359
Projects
PRECISION
Funder
The Research Council of Norway, NFR281140
Available from: 2019-04-01. Created: 2019-04-01. Last updated: 2019-11-11. Bibliographically approved.
In thesis
1. Object Detection and Recognition in Unstructured Outdoor Environments
2019 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision and machine learning based systems are often developed to replace humans in harsh, dangerous, or tedious situations, as well as to reduce the time required to accomplish a task. Another goal is to increase performance by introducing automation to tasks such as inspections in manufacturing, sorting timber during harvesting, surveillance, fruit grading, yield prediction, and harvesting operations. Depending on the task, a variety of object detection and recognition algorithms can be applied, including both conventional and deep learning based approaches. Moreover, when developing image analysis algorithms, it is essential to consider environmental challenges, e.g. illumination changes, occlusion, shadows, and divergence in colour, shape, texture, and size of objects.

The goal of this thesis is to address these challenges to support the development of autonomous agricultural and forestry systems with enhanced performance and a reduced need for human involvement. This thesis provides algorithms and techniques based on adaptive image segmentation for tree detection in forest environments and for yellow pepper recognition in greenhouses. For segmentation, seed point generation and a region growing method were used to detect trees. An algorithm based on reinforcement learning was developed to detect yellow peppers. RGB and depth data were integrated and used in classifiers to detect trees, bushes, stones, and humans in forest environments. Another part of the thesis describes deep learning based approaches to detect stumps and classify the level of rot based on images.
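To illustrate the region growing idea mentioned above, here is a minimal, hypothetical sketch (the thesis implementation is more elaborate): starting from a seed pixel, the region absorbs 4-connected neighbours whose intensity stays within a tolerance of the seed's intensity.

```python
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed pixel by at most `tol`.
    `image` is a 2D list of intensities; returns the set of (row, col)
    coordinates belonging to the region."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        # Visit the four axis-aligned neighbours.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# Toy 3x3 "image": a bright object in the corner, dark elsewhere.
img = [[10, 12, 90],
       [11, 13, 95],
       [80, 85, 99]]
print(len(region_grow(img, (0, 0), tol=5)))  # prints 4
```

In practice the seed points would come from a separate seed generation step, and the similarity test would typically compare against a running region statistic rather than the seed alone.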

Another major contribution of this thesis is a method using infrared images to detect humans in forest environments. To detect humans, one shape-dependent and one shape-independent method were proposed.

Algorithms to recognize the intention of humans based on hand gestures were also developed. 3D hand gestures were recognized by first detecting and tracking hands in a sequence of depth images, and then utilizing optical flow constraint equations.

The thesis also presents methods to answer human queries about objects and their spatial relation in images. The solution was developed by merging a deep learning based method for object detection and recognition with natural language processing techniques.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2019. p. 88
Series
Report / UMINF, ISSN 0348-0542 ; 19.08
Keywords
Computer vision, Deep Learning, Harvesting Robots, Automatic Detection and Recognition
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-165069 (URN)
978-91-7855-147-7 (ISBN)
Public defence
2019-12-05, MA121, MIT Building, Umeå, 13:00 (English)
Opponent
Supervisors
Available from: 2019-11-14. Created: 2019-11-08. Last updated: 2019-11-12. Bibliographically approved.

Open Access in DiVA

fulltext (12726 kB)
File information
File name: FULLTEXT01.pdf
File size: 12726 kB
Checksum: SHA-512
ef253ade8b81efeee773c78e8d6b78506fe8c1b2069c55b2690986cdc26706c465c11147a22efdda8b57e923b2d5a0150b0aa056c3f787ca5ad0aa4634278a6e
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
PubMed

Authority records

Ostovar, Ahmad; Ringdahl, Ola
