Integrating Kinect depth data with a stochastic object classification framework for forestry robots
Umeå University, Faculty of Science and Technology, Department of Computing Science.
2012 (English). In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics: Volume 2, SciTePress, 2012, pp. 314-320. Conference paper, published paper (Other academic)
Place, publisher, year, edition, pages
SciTePress, 2012. pp. 314-320
National subject category
Robotics and automation
Identifiers
URN: urn:nbn:se:umu:diva-71443
OAI: oai:DiVA.org:umu-71443
DiVA id: diva2:624016
Conference
9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July 2012, Rome, Italy
Available from: 2013-05-29. Created: 2013-05-29. Last updated: 2019-11-11. Bibliographically reviewed.
Included in thesis
1. Object Classification and Image Labeling using RGB-Depth Information
2013 (English). Licentiate thesis, compilation (Other academic)
Alternative title [sv]
Klassificering av föremål och bildmärkning med hjälp av RGB-djup-information
Abstract [en]

This thesis is part of research on the vision systems of four robots in the EU-funded project CROPS, where the robots should harvest apples, sweet peppers and grapes, and explore forests. The whole process of designing such a system, including the software architecture, creation of the image database, image labeling, and object detection, is presented.

The software architecture chapter of this thesis provides a review of some of the latest frameworks for robotics. It describes the structure of robotics components and the communication systems between them. The vision system is a subsystem of the robotics architecture, studied in conjunction with the other components of the robotics system. To build a vision system, three main steps should be taken. First, a collection of images similar to what the robots are expected to encounter should be created. Second, all the images should be labeled, manually or automatically. Finally, learning systems should use the labeled images to build object models. Details about these steps make up the majority of the content of this thesis.

With new, widely available low-cost sensors such as the Microsoft Kinect, it is possible to use depth images along with RGB images to increase the performance of vision systems. We particularly focus on various methods that integrate depth information into the three steps mentioned above for building a vision system. More specifically, image labeling tools help to extract objects in images to be used as ground truth for the learning and testing processes in object detection. The inputs for such tools are usually RGB images. Despite the existence of many powerful tools for image labeling, there is still a need for RGB-Depth-adapted tools. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-Depth. Second, it minimizes the interaction time needed for object extraction by using a highly efficient segmentation method in RGB-Depth space. The entire procedure requires very few clicks compared to other existing tools. In fact, when the desired object is the closest object to the camera, as is the case in our forestry application, no click is required to extract the object.
Finally, while we present the state of the art in object detection for 2D environments, object detection using RGB-Depth information is mainly addressed as future work.
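The no-click extraction mentioned in the abstract can be illustrated with a small sketch. The thesis does not publish its segmentation code here; the snippet below (function name and the 0.15 m depth margin are assumptions for illustration) simply selects the pixels within a small margin of the closest valid reading in a metric depth map, exploiting the fact that Kinect-style sensors report 0 for missing depth:

```python
import numpy as np

def extract_closest_object(depth, margin=0.15, invalid=0.0):
    """Return a boolean mask of pixels within `margin` metres of the
    closest valid depth reading; `invalid` marks missing measurements."""
    valid = depth > invalid
    if not valid.any():
        return np.zeros(depth.shape, dtype=bool)
    d_min = depth[valid].min()
    return valid & (depth <= d_min + margin)

# Synthetic 6x6 depth map (metres): an object at ~1.0 m in front of
# background at ~4.0 m, with one invalid (0) reading.
depth = np.full((6, 6), 4.0)
depth[1:5, 2:4] = 1.0          # closest object (8 pixels)
depth[0, 0] = 0.0              # missing measurement
mask = extract_closest_object(depth)
```

A real labeling tool would refine this mask (e.g. with connected components and RGB cues), but the depth threshold alone already isolates the foreground object without any user click.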

Place, publisher, year, edition, pages
Umeå: Department of Computing Science, Umeå University, 2013. p. 62
National subject category
Robotics and automation
Identifiers
urn:nbn:se:umu:diva-71477 (URN)
978-91-7459-657-1 (ISBN)
978-91-7459-658-8 (ISBN)
Presentation
2013-05-06, MIT-building, MC313, Umeå University, Umeå, 13:00 (English)
Supervisors
Available from: 2013-05-30. Created: 2013-05-30. Last updated: 2018-06-08. Bibliographically reviewed.

Open Access in DiVA

Full text not available in DiVA

Person records (BETA)

Pordel, Mostafa; Hellström, Thomas; Ostovar, Ahmad

