Robot Learning and Reproduction of High-Level Behaviors
Umeå University, Faculty of Science and Technology, Department of Computing Science. (Robotics)
2013 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Learning techniques are drawing extensive attention in the robotics community. Among the reasons for moving from traditional preprogrammed robots to more advanced, human-fashioned techniques are saving time and energy, and allowing non-technical users to work with robots easily. Learning from Demonstration (LfD) and Imitation Learning (IL) are among the most popular learning techniques for teaching robots new skills by observing a human or robot tutor.

Flawlessly teaching robots new skills by LfD requires a good understanding of all challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged in correctly learning new skills. The most remarkable are the ability to direct attention to important aspects of a demonstration, and to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions is essential for correctly and completely replicating the behavior with the same effects on the world. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the repertoire.

Considering the identified main challenges, this thesis attempts to model imitation learning in robots, mainly focusing on understanding the tutor's intentions and recognizing which elements of the demonstration need the robot's attention. To this end, an architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. This is further applied in learning object affordances, behavior arbitration and goal emulation.

The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of goals and of what to look for in the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are specified.
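The learn-then-reproduce cycle the abstract describes can be sketched structurally. Every class, field, and heuristic below (treating elements common to all demonstrations as the relevant cues, and the final observed effect as the assumed goal) is an illustrative assumption for exposition, not the thesis's actual design or API.

```python
from dataclasses import dataclass

@dataclass
class Demonstration:
    observations: list          # sequence of (object, action, effect) tuples

@dataclass
class Skill:
    goal: str                   # intended effect on the world
    relevant_cues: set          # demonstration elements worth attending to
    actions: list               # motor actions achieving the goal

class ImitationLearner:
    def __init__(self):
        self.repertoire = {}

    def learn(self, name, demos):
        """Extract a goal and relevant cues from repeated demonstrations."""
        # Elements common to every demonstration are treated as relevant;
        # everything else is incidental and ignored during reproduction.
        common = set(demos[0].observations)
        for d in demos[1:]:
            common &= set(d.observations)
        goal = demos[0].observations[-1][2]   # final effect = assumed intent
        actions = [a for (_, a, _) in demos[0].observations]
        self.repertoire[name] = Skill(goal, common, actions)
        return self.repertoire[name]

    def reproduce(self, name):
        """Replay the stored actions for a learned skill."""
        return self.repertoire[name].actions

# Hypothetical demonstrations of picking up a cup
learner = ImitationLearner()
demos = [
    Demonstration([("cup", "reach", "near"), ("cup", "grasp", "held")]),
    Demonstration([("cup", "reach", "near"), ("cup", "grasp", "held")]),
]
skill = learner.learn("pick-up-cup", demos)
```

The intersection heuristic is one simple way to model "what to look for": cues that vary across demonstrations are assumed irrelevant to the tutor's intent.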

Place, publisher, year, edition, pages
Umeå: Umeå Universitet, 2013. 40 p.
Series
Report / UMINF, ISSN 0348-0542; 2013:20
National Category
Robotics
Identifiers
URN: urn:nbn:se:umu:diva-87258
ISBN: 978-91-7459-712-7 (print)
OAI: oai:DiVA.org:umu-87258
DiVA: diva2:708128
Presentation
2013-09-06, Naturvetarhuset, N330, Umeå University, Umeå, 13:15 (English)
Available from: 2014-03-27. Created: 2014-03-26. Last updated: 2014-04-01. Bibliographically approved.
List of papers
1. Learning High-Level Behaviors From Demonstration Through Semantic Networks
2012 (English). In: Proceedings of 4th International Conference on Agents and Artificial Intelligence, 2012, 419-426 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present an approach for high-level behavior recognition and selection, integrated with a low-level controller, to help the robot learn new skills from demonstrations. By means of a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.
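The spreading activation mechanism that such Semantic Networks typically rely on can be sketched as follows. The graph, edge weights, decay, and node names here are illustrative assumptions for exposition, not the paper's actual network.

```python
def spread_activation(graph, sources, decay=0.5, threshold=0.1, max_steps=3):
    """Propagate activation from source nodes through weighted edges.

    graph: dict mapping node -> list of (neighbour, weight) pairs.
    sources: dict mapping node -> initial activation.
    """
    activation = dict(sources)
    frontier = dict(sources)
    for _ in range(max_steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbour, weight in graph.get(node, []):
                out = act * weight * decay
                if out < threshold:
                    continue  # prune negligible contributions
                activation[neighbour] = activation.get(neighbour, 0.0) + out
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + out
        if not next_frontier:
            break
        frontier = next_frontier
    return activation

# Hypothetical fragment linking a percept to concepts and a motor action
graph = {
    "cup": [("graspable", 0.9), ("container", 0.8)],
    "graspable": [("grasp-action", 0.7)],
}
result = spread_activation(graph, {"cup": 1.0})
```

Activating the percept node "cup" raises the activation of related concepts and, transitively, of an associated motor action, which is the basic mechanism for relating concepts to sensory-motor states.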

Keywords
Learning from Demonstration, High-Level Behaviors, Semantic Networks, Robot Learning
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-52233 (URN)
10.5220/0003834304190426 (DOI)
000327208400054 ()
978-989-8425-95-9 (ISBN)
Conference
4th International Conference on Agents and Artificial Intelligence (ICAART), 6-8 February 2012, Vilamoura, Algarve, Portugal
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2012-02-20. Created: 2012-02-14. Last updated: 2017-01-19. Bibliographically approved.
2. Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration
2013 (English). In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, 67-74 p. Conference paper, Published paper (Refereed)
Abstract [en]

This paper gives a brief overview of the challenges in designing cognitive architectures for Learning from Demonstration. By investigating the features and functionality of some related architectures, we propose a modular architecture particularly suited for sequentially learning high-level representations of behaviors. We head towards designing and implementing goal-based imitation learning that not only allows the robot to learn the conditions necessary for executing particular behaviors, but also to understand the intents of the tutor and reproduce the same behaviors accordingly.

Keywords
Learning from Demonstration, Cognitive Architecture, Goal Based Imitation
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-67930 (URN)
10.1109/CogSIMA.2013.6523825 (DOI)
000325568600010 ()
978-1-4673-2437-3 (ISBN)
Conference
3rd IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, CogSIMA 2013, 25 February 2013 through 28 February 2013, San Diego, CA
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2013-04-08. Created: 2013-04-08. Last updated: 2014-12-18. Bibliographically approved.
3. Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations
2015 (English). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, 24-39 p. Article in journal (Refereed). Published
Abstract [en]

In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires the robot's attention, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, the applicability of the system is evaluated in an object shape classification scenario.
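The role ACO can play in deciding which demonstration elements deserve attention can be sketched with a minimal Ant System-style loop: ants repeatedly sample feature subsets, and pheromone accumulates on features that appear in high-quality solutions. The feature names, scoring function, and all parameters below are illustrative assumptions, not the paper's actual formulation.

```python
import random

def aco_select(features, score, n_ants=20, n_iters=30, rho=0.3, seed=0):
    """Return pheromone levels after reinforcing feature subsets that score well.

    features: list of candidate feature names.
    score: function mapping a frozenset of features to a quality in [0, 1].
    rho: evaporation rate.
    """
    rng = random.Random(seed)
    pheromone = {f: 1.0 for f in features}
    for _ in range(n_iters):
        trails = []
        for _ in range(n_ants):
            total = sum(pheromone.values())
            # Each ant includes a feature with probability proportional to
            # its pheromone share (floored at 0.05 to keep exploring).
            subset = frozenset(f for f in features
                               if rng.random() < max(pheromone[f] / total, 0.05))
            trails.append((subset, score(subset)))
        # Evaporate old pheromone, then deposit proportionally to quality.
        for f in features:
            pheromone[f] *= (1.0 - rho)
        for subset, q in trails:
            for f in subset:
                pheromone[f] += q
    return pheromone

# Hypothetical scenario: only object shape matters for the demonstrated task.
def score(subset):
    """Quality of a candidate feature set: 1 if it contains the relevant cue."""
    return 1.0 if "shape" in subset else 0.0

ph = aco_select(["shape", "color", "position"], score)
```

After a few iterations the pheromone on the truly relevant feature dominates, which is the sense in which pheromone trails can direct the robot's attention to what matters in a demonstration.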

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Learning from Demonstration, Semantic Networks, Ant Colony Optimization, High-Level Behavior Learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-87257 (URN)
10.1016/j.robot.2014.12.001 (DOI)
000349724400003 ()
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-03-26. Created: 2014-03-26. Last updated: 2017-12-05. Bibliographically approved.

Open Access in DiVA

Robot Learning and Reproduction of High-Level Behaviors (FULLTEXT02.pdf, 2051 kB)
