Integrating Kinect depth data with a stochastic object classification framework for forestry robots
Umeå University, Faculty of Science and Technology, Department of Computing Science.
2012 (English). In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics: Volume 2, SciTePress, 2012, pp. 314-320. Conference paper, published paper (Other academic)
Place, publisher, year, edition, pages
SciTePress, 2012. pp. 314-320
National subject category
Robotics and automation
Identifiers
URN: urn:nbn:se:umu:diva-71443; OAI: oai:DiVA.org:umu-71443; DiVA id: diva2:624016
Conference
9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July 2012, Rome, Italy
Available from: 2013-05-29. Created: 2013-05-29. Last updated: 2019-11-11. Bibliographically reviewed.
Part of thesis
1. Object Classification and Image Labeling using RGB-Depth Information
2013 (English). Licentiate thesis, compilation (Other academic)
Alternative title [sv]
Klassificering av föremål och bildmärkning med hjälp av RGB-djup-information
Abstract [en]

This thesis is part of the research on vision systems for four robots in the EU-funded project CROPS [1], in which the robots should harvest apples, sweet peppers, and grapes, and explore forests. The whole process of designing such a system, including the software architecture, creation of the image database, image labeling, and object detection, is presented.

The software architecture chapter of this thesis provides a review of some of the latest frameworks for robotics. It describes the structure of robotics components and the communication systems between them. The vision system is a subsystem of the robotics architecture, studied in conjunction with the other components of the robotics system.

To build a vision system, three main steps should be taken. First, a collection of images similar to what the robots are expected to encounter should be created. Second, all the images should be labeled, manually or automatically. Finally, learning systems should use the labeled images to build object models. Details about these steps make up the majority of the content of this thesis. With new, widely available low-cost sensors such as the Microsoft Kinect, it is possible to use depth images along with RGB images to increase the performance of vision systems. We particularly focus on various methods that integrate depth information into the three steps mentioned for building a vision system.

More specifically, image labeling tools help to extract objects in images to be used as ground truth for the learning and testing processes in object detection. The inputs for such tools are usually RGB images. Despite the existence of many powerful tools for image labeling, there is still a need for tools adapted to RGB-Depth data. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-Depth. Second, it minimizes the interaction time needed for object extraction by using a highly efficient segmentation method in RGB-Depth space. The entire procedure requires very few clicks compared to other existing tools. In fact, when the desired object is the closest object to the camera, as is the case in our forestry application, no click is required to extract the object.
Finally, while we present the state of the art in object detection in 2D environments, object detection using RGB-Depth information is mainly addressed as future work.
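The abstract does not specify how the click-free extraction works; below is a minimal sketch of one way the closest-object idea could be realized, assuming a metric depth map where invalid Kinect pixels read 0 (the function and parameter names are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy import ndimage

def extract_closest_object(depth, margin=0.15, invalid=0.0):
    """Extract the object closest to the camera from a depth image.

    Keeps pixels within `margin` metres of the minimum valid depth,
    then returns the largest connected component as a binary mask.
    """
    valid = depth > invalid  # Kinect reports 0 for missing depth readings
    if not valid.any():
        return np.zeros_like(depth, dtype=bool)
    nearest = depth[valid].min()
    candidate = valid & (depth <= nearest + margin)
    labels, n = ndimage.label(candidate)  # 4-connected components
    if n == 0:
        return np.zeros_like(depth, dtype=bool)
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Toy 5x5 depth map: a near object (1.0 m) on a far background (3.0 m)
depth = np.full((5, 5), 3.0)
depth[1:3, 1:3] = 1.0
mask = extract_closest_object(depth)
```

Thresholding at the nearest valid depth plus a small margin, then keeping the largest connected component, is one simple way in which "no click required" can hold when the target is nearest to the camera.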

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2013. p. 62
National subject category
Robotics and automation
Identifiers
urn:nbn:se:umu:diva-71477 (URN); 978-91-7459-657-1 (ISBN); 978-91-7459-658-8 (ISBN)
Presentation
2013-05-06, MIT-building, MC313, Umeå University, Umeå, 13:00 (English)
Supervisors
Available from: 2013-05-30. Created: 2013-05-30. Last updated: 2018-06-08. Bibliographically reviewed.
2. Object Detection and Recognition in Unstructured Outdoor Environments
2019 (English). Doctoral thesis, compilation (Other academic)
Abstract [en]

Computer vision and machine learning based systems are often developed to replace humans in harsh, dangerous, or tedious situations, as well as to reduce the time required to accomplish a task. Another goal is to increase performance by introducing automation to tasks such as inspection in manufacturing, sorting timber during harvesting, surveillance, fruit grading, yield prediction, and harvesting operations. Depending on the task, a variety of object detection and recognition algorithms can be applied, including both conventional and deep learning based approaches. Moreover, when developing image analysis algorithms, it is essential to consider environmental challenges, e.g. illumination changes, occlusion, shadows, and divergence in the colour, shape, texture, and size of objects.

The goal of this thesis is to address these challenges to support the development of autonomous agricultural and forestry systems with enhanced performance and a reduced need for human involvement. The thesis provides algorithms and techniques based on adaptive image segmentation for tree detection in forest environments and for yellow pepper recognition in greenhouses. For segmentation, seed point generation and a region-growing method were used to detect trees. An algorithm based on reinforcement learning was developed to detect yellow peppers. RGB and depth data were integrated and used in classifiers to detect trees, bushes, stones, and humans in forest environments. Another part of the thesis describes deep learning based approaches to detect stumps and classify the level of rot from images.
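The abstract names seed-based region growing but gives no details; a minimal sketch of one common formulation on a depth image, assuming a fixed depth tolerance between neighbouring pixels (the function name, seed, and tolerance value are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(depth, seed, tol=0.05):
    """Grow a region from `seed` over pixels whose depth differs
    from an already-accepted 4-neighbour by at most `tol`
    (breadth-first flood fill)."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(depth[nr, nc] - depth[r, c]) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy 4x4 depth map: a vertical "trunk" at 2.0 m on a 5.0 m background
depth = np.full((4, 4), 5.0)
depth[:, 1] = 2.0
mask = region_grow(depth, seed=(0, 1))
```

Comparing each pixel to its accepted neighbour (rather than to the seed) lets the region follow gradual depth ramps such as a leaning trunk, at the cost of possibly leaking across slow gradients.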

Another major contribution of this thesis is a method using infrared images to detect humans in forest environments. To detect humans, one shape-dependent and one shape-independent method were proposed.

Algorithms to recognize the intention of humans based on hand gestures were also developed. 3D hand gestures were recognized by first detecting and tracking hands in a sequence of depth images, and then utilizing optical flow constraint equations.
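The optical flow constraint referred to above is the standard brightness-constancy assumption from the computer vision literature; it is stated here for completeness and is not a claim about the thesis's exact formulation:

```latex
% Brightness constancy: a moving point keeps its intensity,
%   I(x + u\,dt,\; y + v\,dt,\; t + dt) = I(x, y, t).
% A first-order Taylor expansion yields the optical flow constraint:
I_x\,u + I_y\,v + I_t = 0
```

where $I_x$, $I_y$, $I_t$ are the partial derivatives of the image intensity and $(u, v)$ is the image-plane velocity of the point.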

The thesis also presents methods to answer human queries about objects and their spatial relation in images. The solution was developed by merging a deep learning based method for object detection and recognition with natural language processing techniques.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2019. p. 88
Series
Report / UMINF, ISSN 0348-0542 ; 19.08
Keywords
Computer vision, Deep Learning, Harvesting Robots, Automatic Detection and Recognition
National subject category
Computer vision and robotics (autonomous systems)
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-165069 (URN); 978-91-7855-147-7 (ISBN)
Public defence
2019-12-05, MA121, MIT Building, Umeå, 13:00 (English)
Opponent
Supervisors
Available from: 2019-11-14. Created: 2019-11-08. Last updated: 2019-11-12. Bibliographically reviewed.

Open Access in DiVA

Full text not available in DiVA

Authority records

Pordel, Mostafa; Hellström, Thomas; Ostovar, Ahmad
