Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
Ostovar, Ahmad — Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0003-0830-5303
Ringdahl, Ola — Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-4600-8652
Hellström, Thomas — Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7242-2200
2018 (English) In: Robotics, E-ISSN 2218-6581, Vol. 7, no. 1, article id 11. Article in journal (Refereed). Published
Abstract [en]

The presented work is part of the H2020 project SWEEPER, with the overall goal to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used, with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires fewer computational resources than exhaustive search, which is used as a benchmark, and achieves higher performance than a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images than other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% with the Decaying Epsilon-Greedy algorithm, using 73% fewer iterations than the benchmark.
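As a rough illustration of the approach the abstract describes, the sketch below treats each candidate threshold as an action in a stateless (bandit-style) variant of Q-learning: the reward is the similarity between the thresholded image and its labeled mask, and exploration follows a decaying epsilon-greedy schedule. All function names, the Jaccard-index reward, and the decay constants are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: decaying epsilon-greedy Q-learning over discrete thresholds.
# The reward for choosing a threshold is the similarity (Jaccard index) between
# the resulting segmentation and the labeled mask, as the abstract describes.
import numpy as np

def jaccard(pred, truth):
    """Similarity between a binary segmentation and a labeled mask."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 0.0

def learn_threshold(images, masks, thresholds, episodes=200,
                    alpha=0.1, eps0=0.7, decay=0.98, seed=0):
    """Pick a threshold with decaying epsilon-greedy Q-learning (bandit form)."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(thresholds))          # one Q-value per candidate threshold
    eps = eps0
    for _ in range(episodes):
        i = rng.integers(len(images))      # sample a labeled training image
        if rng.random() < eps:             # explore: random threshold
            a = rng.integers(len(thresholds))
        else:                              # exploit: current best estimate
            a = int(np.argmax(q))
        pred = images[i] > thresholds[a]   # segment with the chosen threshold
        r = jaccard(pred, masks[i])        # reward: similarity to the label
        q[a] += alpha * (r - q[a])         # incremental Q-update
        eps *= decay                       # decaying epsilon-greedy schedule
    return thresholds[int(np.argmax(q))]

# Tiny synthetic demo: bright "fruit" pixels on a darker background.
rng = np.random.default_rng(1)
masks = [rng.random((64, 64)) > 0.8 for _ in range(10)]
images = [m * 0.9 + rng.random((64, 64)) * 0.3 for m in masks]
print("selected threshold:",
      learn_threshold(images, masks, thresholds=np.linspace(0.1, 0.9, 17)))
```

The decaying schedule spends early episodes sampling thresholds broadly and later episodes refining the current best estimate, which is consistent with the abstract's finding that Decaying Epsilon-Greedy reaches near-exhaustive-search performance with far fewer iterations.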

Place, publisher, year, edition, pages
MDPI, 2018. Vol. 7, no. 1, article id 11
Keywords [en]
reinforcement learning, Q-Learning, image thresholding, ϵ-greedy strategies
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computerized image analysis
Identifiers
URN: urn:nbn:se:umu:diva-144513, DOI: 10.3390/robotics7010011, ISI: 000432680200008, OAI: oai:DiVA.org:umu-144513, DiVA id: diva2:1180297
Research funder
EU, Horizon 2020, 644313. Available from: 2018-02-05 Created: 2018-02-05 Last updated: 2019-11-11. Bibliographically approved
Part of thesis
1. Object Detection and Recognition in Unstructured Outdoor Environments
2019 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision and machine learning based systems are often developed to replace humans in harsh, dangerous, or tedious situations, as well as to reduce the time required to accomplish a task. Another goal is to increase performance by introducing automation to tasks such as inspections in manufacturing applications, sorting timber during harvesting, surveillance, fruit grading, yield prediction, and harvesting operations. Depending on the task, a variety of object detection and recognition algorithms can be applied, including both conventional and deep learning based approaches. Moreover, within the process of developing image analysis algorithms, it is essential to consider environmental challenges, e.g. illumination changes, occlusion, shadows, and divergence in colour, shape, texture, and size of objects.

The goal of this thesis is to address these challenges to support the development of autonomous agricultural and forestry systems with enhanced performance and a reduced need for human involvement. This thesis provides algorithms and techniques based on adaptive image segmentation for tree detection in forest environments and for yellow pepper recognition in greenhouses. For segmentation, seed point generation and a region growing method were used to detect trees. An algorithm based on reinforcement learning was developed to detect yellow peppers. RGB and depth data were integrated and used in classifiers to detect trees, bushes, stones, and humans in forest environments. Another part of the thesis describes deep learning based approaches to detect stumps and classify their level of rot based on images.
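As a minimal sketch of the region growing step mentioned above, the snippet below grows a segment outward from a seed pixel, absorbing 4-connected neighbours whose intensity stays within a tolerance of the region's running mean. The summary does not specify the homogeneity criterion; this mean-based rule is a common textbook choice, not necessarily the one used in the thesis papers.

```python
# Illustrative region growing from a seed point (assumed homogeneity rule:
# candidate pixels must lie within `tol` of the region's running mean).
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Return a boolean mask of the region grown from `seed` (row, col)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(image[seed]), 1       # running mean of the region
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True        # absorb homogeneous neighbour
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Toy demo: a bright square on a dark background, seeded inside the square.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
print(region_grow(img, seed=(16, 16), tol=0.2).sum(), "pixels grown")
```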

Another major contribution of this thesis is a method using infrared images to detect humans in forest environments. To detect humans, one shape-dependent and one shape-independent method were proposed.

Algorithms to recognize the intention of humans based on hand gestures were also developed. 3D hand gestures were recognized by first detecting and tracking hands in a sequence of depth images, and then utilizing optical flow constraint equations.
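For context, "optical flow constraint equations" conventionally refers to the brightness constancy constraint; the thesis may use an extended form suited to depth image sequences, but the standard 2D version is:

```latex
% Brightness constancy: a moving point keeps its intensity,
% I(x+u, y+v, t+1) \approx I(x, y, t); first-order linearization gives
\[
  I_x\,u + I_y\,v + I_t = 0,
\]
% where I_x, I_y, I_t are the spatial and temporal image derivatives and
% (u, v) is the per-pixel motion (optical flow) to be estimated.
```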

The thesis also presents methods to answer human queries about objects and their spatial relations in images. The solution was developed by merging a deep learning based method for object detection and recognition with natural language processing techniques.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2019. p. 88
Series
Report / UMINF, ISSN 0348-0542 ; 19.08
Keywords
Computer vision, Deep Learning, Harvesting Robots, Automatic Detection and Recognition
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computer science
Identifiers
urn:nbn:se:umu:diva-165069 (URN), 978-91-7855-147-7 (ISBN)
Public defence
2019-12-05, MA121, MIT Building, Umeå, 13:00 (English)
Available from: 2019-11-14 Created: 2019-11-08 Last updated: 2019-11-12. Bibliographically approved

Open Access in DiVA

fulltext (1945 kB), 161 downloads
File information
File name: FULLTEXT01.pdf, File size: 1945 kB, Checksum: SHA-512
42909c300aad41d81179d54906dedfffd3c25879da26b785402119245f64b94f366a7fabbf23919cb24687809d999e5e39d3527f0299fa8999bb2953d699497f
Type: fulltext, MIME type: application/pdf
