Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations
Umeå University, Faculty of Science and Technology, Department of Computing Science (Robotics).
Umeå University, Faculty of Science and Technology, Department of Computing Science (Robotics).
Umeå University, Faculty of Science and Technology, Department of Computing Science.
2015 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, pp. 24-39. Article in journal (Refereed) Published
Abstract [en]

In domains where robots carry out human tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges in Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires the robot's attention, and to generalize the learned behavior so that the robot can perform it in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for reproducing behaviors under new circumstances. Finally, the applicability of the system is evaluated in an object shape classification scenario.
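The combination described in the abstract, spreading activation over a semantic network with ACO-style pheromone reinforcement, can be sketched as follows. This is a hedged illustration only, not the authors' implementation: the example network, the decay rate, and the evaporation/deposit rule are assumptions chosen for demonstration.

```python
# Minimal sketch: spreading activation over a semantic network, with an
# ACO-style pheromone update on edges. Hypothetical network and parameters.

class SemanticNetwork:
    def __init__(self):
        self.edges = {}       # node -> {neighbor: edge weight}
        self.pheromone = {}   # (node, neighbor) -> pheromone level

    def add_edge(self, a, b, weight=1.0):
        self.edges.setdefault(a, {})[b] = weight
        self.edges.setdefault(b, {})[a] = weight
        self.pheromone[(a, b)] = self.pheromone[(b, a)] = 1.0

    def spread(self, sources, decay=0.5, steps=2):
        """Propagate activation outward from the source nodes."""
        activation = {n: 1.0 for n in sources}
        frontier = dict(activation)
        for _ in range(steps):
            nxt = {}
            for node, act in frontier.items():
                for nb, w in self.edges.get(node, {}).items():
                    # Activation attenuated by edge weight and decay,
                    # amplified by the pheromone level on the edge.
                    a = act * w * decay * self.pheromone[(node, nb)]
                    nxt[nb] = nxt.get(nb, 0.0) + a
            for n, a in nxt.items():
                activation[n] = activation.get(n, 0.0) + a
            frontier = nxt
        return activation

    def reinforce(self, path, deposit=0.2, evaporation=0.1):
        """ACO-style update: evaporate all trails, deposit along a path."""
        for k in self.pheromone:
            self.pheromone[k] *= (1.0 - evaporation)
        for a, b in zip(path, path[1:]):
            self.pheromone[(a, b)] += deposit
            self.pheromone[(b, a)] += deposit

# Toy example: activation flows from a demonstrated action to related concepts.
net = SemanticNetwork()
net.add_edge("grasp", "cup")
net.add_edge("cup", "round")
act = net.spread({"grasp"})
net.reinforce(["grasp", "cup", "round"])
```

Repeated demonstrations would call `reinforce` on the paths that led to correct reproductions, so frequently confirmed associations accumulate pheromone and dominate later spreads, while unused associations evaporate.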

Place, publisher, year, edition, pages
Elsevier, 2015. Vol. 65, pp. 24-39.
Keyword [en]
Learning from Demonstration, Semantic Networks, Ant Colony Optimization, High-Level Behavior Learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:umu:diva-87257
DOI: 10.1016/j.robot.2014.12.001
ISI: 000349724400003
OAI: oai:DiVA.org:umu-87257
DiVA: diva2:708069
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-03-26 Created: 2014-03-26 Last updated: 2017-12-05. Bibliographically approved
In thesis
1. Robot Learning and Reproduction of High-Level Behaviors
2013 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Learning techniques are drawing extensive attention in the robotics community. Among the reasons for moving from traditional preprogrammed robots to more advanced, human-fashioned techniques are to save time and energy, and to allow non-technical users to easily work with robots. Learning from Demonstration (LfD) and Imitation Learning (IL), which teach robots new skills through observation of a human or robot tutor, are among the most popular learning techniques.

Flawlessly teaching robots new skills through LfD requires a good understanding of the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged in correctly learning new skills. The most remarkable are the ability to direct attention to important aspects of demonstrations, and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions is essential for replicating the behavior correctly and completely, with the same effects on the world. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the repertoire.

Considering the identified main challenges, this thesis attempts to model imitation learning in robots, mainly focusing on understanding the tutor's intentions and recognizing which elements of the demonstration need the robot's attention. To this end, an architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. This is further applied to learning object affordances, behavior arbitration and goal emulation.

The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of goals and of what to look for in the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are specified.

Place, publisher, year, edition, pages
Umeå: Umeå Universitet, 2013. 40 p.
Series
Report / UMINF, ISSN 0348-0542 ; 2013:20
National Category
Robotics
Identifiers
urn:nbn:se:umu:diva-87258 (URN), 978-91-7459-712-7 (ISBN)
Presentation
2013-09-06, Naturvetarhuset, N330, Umeå University, Umeå, 13:15 (English)
Opponent
Supervisors
Available from: 2014-03-27 Created: 2014-03-26 Last updated: 2014-04-01. Bibliographically approved
2. Cognitive Interactive Robot Learning
2014 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Building general-purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce them under any circumstances, at the right time and in an appropriate way, requires a good understanding of all challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged in learning new skills correctly. The most remarkable are the ability to direct attention to important aspects of demonstrations, and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions, and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproducing high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced.

The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is a set of methods for resolving ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer them correctly. The provided solution is inspired by human memory models and priming mechanisms that give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual-servoing-guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions in the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are discussed.
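The priming idea in the paragraph above, prior context giving the robot clues that raise the probability of inferring the right intention from an ambiguous demonstration, can be sketched in a few lines. This is an illustrative assumption, not the thesis's actual mechanism: the candidate intentions, the scores, and the multiplicative boost rule are all hypothetical.

```python
# Hedged sketch of priming for intention disambiguation: residual
# activation from earlier context biases which intention is inferred.
# Intention names and the boost rule are illustrative assumptions.

def infer_intention(evidence, priming, boost=0.5):
    """Pick the intention with the highest primed score.

    evidence: intention -> likelihood from the current demonstration
    priming:  intention -> residual activation from prior context
    """
    scored = {i: p * (1.0 + boost * priming.get(i, 0.0))
              for i, p in evidence.items()}
    return max(scored, key=scored.get)

# Ambiguous demonstration: two intentions fit equally well on their own.
evidence = {"clean-table": 0.5, "set-table": 0.5}
# Recent context (e.g. dishes just washed) primes "set-table",
# breaking the tie in its favor.
primed = infer_intention(evidence, {"set-table": 0.8})
```

With no priming the two candidates are indistinguishable; the primed score tips the decision, which mirrors how the thesis describes memory-inspired clues reducing the number of demonstrations needed.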

Abstract [sv]

Building autonomous robots that suit a large number of different user-defined applications requires a leap from today's specialized machines to more flexible solutions. To reach this goal, one should move from traditional preprogrammed robots to robots that can learn new skills themselves. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or another robot, are among the most popular learning techniques. Showing the robot how it should perform a task is often more natural and intuitive than modifying a complicated control program. But teaching robots new skills so that they can reproduce them under new external conditions, at the right time and in an appropriate way, requires a good understanding of all challenges in the field. Studies of LfD and IL in humans and animals show that several cognitive abilities are involved in learning new skills correctly. The most remarkable are the ability to direct attention to the relevant aspects of a demonstration, and the ability to adapt observed movements to the robot's own body. Furthermore, it is important to have a clear understanding of the teacher's intentions, and to have the ability to generalize them to new situations. Once a learning phase is completed, stimuli can trigger the cognitive system to execute the new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for LfD that mainly focus on understanding the teacher's intentions, and on which parts of a demonstration should receive the robot's attention. The proposed architecture contains the cognitive functions needed for learning and reproducing high-level aspects of demonstrations. Several learning methods for directing the robot's attention and identifying relevant information are proposed.

The architecture integrates motor commands with concepts, objects and the state of the environment to ensure correct reproduction of behaviors. Another main result of this thesis concerns methods for resolving ambiguities in demonstrations, where the teacher's intentions are not clearly expressed and several demonstrations are necessary to infer the intentions correctly. The developed solutions are inspired by models of human memory, and a priming mechanism is used to give the robot clues that can increase the probability that intentions are inferred correctly. In addition to robot learning, the developed techniques have been used in a shared control system based on visually guided behaviors and priming mechanisms. The architecture and learning techniques are applied and evaluated in several real-world scenarios that require a clear understanding of human intentions in the demonstrations. Finally, the developed learning methods are compared, and their applicability under different conditions is discussed.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2014. 54 p.
Series
Report / UMINF, ISSN 0348-0542 ; 14.23
Keyword
Learning from Demonstration, Imitation Learning, Human Robot Interaction, High-Level Behavior Learning, Shared Control, Cognitive Architectures, Cognitive Robotics, Priming
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-97422 (URN), 978-91-7601-189-8 (ISBN)
Public defence
2015-01-16, MA121, MIT-huset, Umeå, 13:30 (English)
Opponent
Supervisors
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-12-19 Created: 2014-12-17 Last updated: 2014-12-19. Bibliographically approved

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Fonooni, Benjamin; Hellström, Thomas; Janlert, Lars-Erik