Cognitive Interactive Robot Learning
Fonooni, Benjamin
Umeå University, Faculty of Science and Technology, Department of Computing Science. (Robotics)
2014 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Building general-purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce the acquired skills under any circumstances, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged in learning new skills correctly. The most remarkable are the ability to direct attention to the important aspects of a demonstration, and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions, and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproduction of high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced. The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is a set of methods for resolving ambiguities in demonstrations, where the tutor's intentions are not clearly expressed and several demonstrations are required to infer intentions correctly. The provided solution is inspired by human memory models and priming mechanisms, which give the robot clues that increase the probability of inferring intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual servoing guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions in the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are discussed.
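
Since the attention and intention mechanisms described above rest on spreading activation in a semantic network with priming (see the list of papers below), a minimal Python sketch may help fix ideas. Everything here, from the class name to the decay constants, is an illustrative assumption rather than the thesis implementation.

    from collections import defaultdict

    class SemanticNetwork:
        """Toy semantic network with spreading activation and priming."""

        def __init__(self):
            self.edges = defaultdict(list)        # concept -> [(neighbor, weight)]
            self.activation = defaultdict(float)  # concept -> activation level

        def connect(self, a, b, weight=1.0):
            self.edges[a].append((b, weight))
            self.edges[b].append((a, weight))

        def prime(self, concept, boost=0.5):
            # Priming raises a concept's baseline activation,
            # biasing later inference toward related concepts.
            self.activation[concept] += boost

        def spread(self, stimulus, energy=1.0, decay=0.6):
            # Propagate activation outward from an observed stimulus.
            visited = set()
            frontier = [(stimulus, energy)]
            while frontier:
                concept, e = frontier.pop()
                if concept in visited:
                    continue
                visited.add(concept)
                self.activation[concept] += e
                for neighbor, w in self.edges[concept]:
                    frontier.append((neighbor, e * decay * w))

    net = SemanticNetwork()
    net.connect("cup", "grasp", 0.9)
    net.connect("cup", "pour", 0.7)
    net.prime("pour")   # a contextual hint biases the network toward pouring
    net.spread("cup")   # the robot observes a cup

    actions = ["grasp", "pour"]
    print(max(actions, key=lambda a: net.activation[a]))  # -> "pour", thanks to priming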

Abstract [sv]

Building autonomous robots that suit a wide range of user-defined applications requires a leap from today's specialized machines to more flexible solutions. To reach this goal, one should move from traditional preprogrammed robots to robots that can acquire new skills on their own. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or another robot, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than modifying a complicated control program. However, teaching robots new skills so that they can reproduce them under new external conditions, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of LfD and IL in humans and animals show that several cognitive abilities are involved in learning new skills correctly. The most remarkable are the ability to direct attention to the relevant aspects of a demonstration, and the ability to adapt observed movements to the robot's own body. It is also essential to have a clear understanding of the teacher's intentions, and to be able to generalize them to new situations. Once a learning phase is completed, stimuli can trigger the cognitive system to execute the new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for LfD that mainly focus on understanding the teacher's intentions, and on determining which parts of a demonstration should receive the robot's attention. The proposed architecture contains the cognitive functions needed for learning and reproducing high-level aspects of demonstrations. Several learning methods for directing the robot's attention and identifying relevant information are proposed. The architecture integrates motor commands with concepts, objects and the state of the environment to ensure correct reproduction of behaviors. Another main contribution of this thesis concerns methods for resolving ambiguities in demonstrations, where the teacher's intentions are not clearly expressed and several demonstrations are necessary to infer the intentions correctly. The developed solutions are inspired by models of human memory, and a priming mechanism is used to give the robot clues that can increase the probability that intentions are inferred correctly. In addition to robot learning, the developed techniques have been applied in a shared control system based on visually guided behaviors and priming mechanisms. The architecture and the learning techniques are applied and evaluated in several real-world scenarios that require a clear understanding of human intentions in the demonstrations. Finally, the developed learning methods are compared, and their applicability under different conditions is discussed.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2014. 54 p.
Series
Report / UMINF, ISSN 0348-0542 ; 14.23
Keyword [en]
Learning from Demonstration, Imitation Learning, Human Robot Interaction, High-Level Behavior Learning, Shared Control, Cognitive Architectures, Cognitive Robotics, Priming
National Category
Robotics
Research subject
Computing Science
Identifiers
URN: urn:nbn:se:umu:diva-97422
ISBN: 978-91-7601-189-8 (print)
OAI: oai:DiVA.org:umu-97422
DiVA: diva2:772802
Public defence
2015-01-16, MA121, MIT-huset, Umeå, 13:30 (English)
Opponent
Supervisors
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-12-19. Created: 2014-12-17. Last updated: 2014-12-19. Bibliographically approved.
List of papers
1. Learning High-Level Behaviors From Demonstration Through Semantic Networks
2012 (English) In: Proceedings of 4th International Conference on Agents and Artificial Intelligence, 2012, 419-426 p. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present an approach for high-level behavior recognition and selection, integrated with a low-level controller, to help the robot learn new skills from demonstrations. With a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.
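
As a hedged illustration of the generalization claim, the following sketch attaches a skill learned from one demonstration to a general concept and retrieves it for a new object through is-a links. The hierarchy, object names and skill names are invented for the example, not taken from the paper.

    # Concept hierarchy: each object points to its parent concept.
    is_a = {
        "ball":   "round_object",
        "orange": "round_object",
        "box":    "cuboid_object",
    }
    skills = {"round_object": "rolling_grasp"}  # skill learned from a ball demo

    def skill_for(obj):
        """Walk up the is-a hierarchy until a concept with an attached skill is found."""
        concept = obj
        while concept is not None:
            if concept in skills:
                return skills[concept]
            concept = is_a.get(concept)
        return None

    print(skill_for("orange"))  # -> "rolling_grasp": generalized from the ball demo
    print(skill_for("box"))     # -> None: no skill learned for cuboid objects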

Keyword
Learning from Demonstration, High-Level Behaviors, Semantic Networks, Robot Learning
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-52233 (URN)
10.5220/0003834304190426 (DOI)
000327208400054 ()
978-989-8425-95-9 (ISBN)
Conference
4th International Conference on Agents and Artificial Intelligence (ICAART), 6-8 February 2012, Vilamoura, Algarve, Portugal
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2012-02-20. Created: 2012-02-14. Last updated: 2017-01-19. Bibliographically approved.
2. Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration
2013 (English) In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, 67-74 p. Conference paper, Published paper (Refereed)
Abstract [en]

This paper gives a brief overview of challenges in designing cognitive architectures for Learning from Demonstration. By investigating the features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We head towards designing and implementing goal-based imitation learning that not only allows the robot to learn the necessary conditions for executing particular behaviors, but also to understand the intentions of the tutor and reproduce the same behaviors accordingly.
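
One hedged reading of what "goal-based" buys over motion replay is sketched below: a behavior is reproduced only when its learned preconditions hold, and success is judged against the tutor's inferred goal rather than the exact demonstrated motions. All names and the toy world are assumptions made for the example.

    def execute_goal_based(behavior, world):
        """Run a behavior if its preconditions hold; report goal achievement."""
        if not all(cond(world) for cond in behavior["preconditions"]):
            return "preconditions not met"
        for action in behavior["actions"]:
            action(world)
        return "goal reached" if behavior["goal"](world) else "goal missed"

    # Toy behavior: place a held object into a box, learned from demonstration.
    world = {"holding": "cube", "cube_in_box": False}

    put_in_box = {
        "preconditions": [lambda w: w["holding"] is not None],
        "actions": [lambda w: w.update(holding=None, cube_in_box=True)],
        "goal": lambda w: w["cube_in_box"],  # the inferred intent of the demo
    }

    print(execute_goal_based(put_in_box, world))  # -> "goal reached"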

Keyword
Learning from Demonstration, Cognitive Architecture, Goal Based Imitation
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-67930 (URN)
10.1109/CogSIMA.2013.6523825 (DOI)
000325568600010 ()
978-1-4673-2437-3 (ISBN)
Conference
3rd IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, CogSIMA 2013, 25-28 February 2013, San Diego, CA
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2013-04-08. Created: 2013-04-08. Last updated: 2014-12-18. Bibliographically approved.
3. Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations
2015 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, 24-39 p. Article in journal (Refereed), Published
Abstract [en]

In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges in Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires the robot's attention, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, the applicability of the system is evaluated in an object shape classification scenario.
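
A minimal sketch of the ant-system flavor of this idea, under the assumption (ours, for illustration) that demonstrations are paths through the semantic network: pheromone accumulates on edges traversed in every demonstration and evaporates elsewhere, so the surviving edges mark what deserves the robot's attention. Deposit, evaporation and threshold values are arbitrary.

    # Pheromone bookkeeping on semantic network edges.
    pheromone = {}

    def evaporate(rho=0.1):
        # Standard ant-system evaporation: tau <- (1 - rho) * tau
        for edge in pheromone:
            pheromone[edge] *= (1.0 - rho)

    def reinforce(path, deposit=1.0):
        # Ants (here: demonstrations) deposit pheromone on traversed edges.
        for edge in path:
            pheromone[edge] = pheromone.get(edge, 0.0) + deposit

    demos = [
        [("cup", "grasp"), ("grasp", "lift")],
        [("cup", "grasp"), ("grasp", "lift"), ("lift", "wave")],  # "wave" is incidental
    ]
    for demo in demos:
        evaporate()
        reinforce(demo)

    # Edges present in every demonstration keep the highest pheromone.
    relevant = [edge for edge, tau in pheromone.items() if tau > 1.0]
    print(relevant)  # [('cup', 'grasp'), ('grasp', 'lift')]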

Place, publisher, year, edition, pages
Elsevier, 2015
Keyword
Learning from Demonstration, Semantic Networks, Ant Colony Optimization, High-Level Behavior Learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-87257 (URN)
10.1016/j.robot.2014.12.001 (DOI)
000349724400003 ()
Projects
INTRO
Funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-03-26. Created: 2014-03-26. Last updated: 2017-12-05. Bibliographically approved.
4. Development of a search and rescue field robotic assistant
2013 (English) In: 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), IEEE Xplore, 2013, 1-5 p. Conference paper, Published paper (Refereed)
Abstract [en]

The work introduced in this paper was performed as part of the FP7 INTRO (Marie-Curie ITN) project. We describe the activities undertaken towards the development of a field robotic assistant for a search and rescue application. We specifically target a rubble clearing task, where the robot ferries small pieces of rubble between two waypoints assigned to it by the human. The aim is to complement a human worker with a robotic assistant for this task, while maintaining a comparable level of speed and efficiency in the task execution. Towards this end we develop and integrate software capabilities in mobile navigation, arm manipulation and high-level task sequence learning. Early outdoor experiments carried out in a quarry are also presented.

Place, publisher, year, edition, pages
IEEE Xplore, 2013
Keyword
arm manipulation, high level task sequence learning, human worker, mobile navigation, rubble clearing task, search and rescue application, search and rescue field robotic assistant, software capabilities, task execution
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-87439 (URN)
10.1109/SSRR.2013.6719357 (DOI)
978-1-4799-0879-0 (ISBN)
1479908797 (ISBN)
Conference
SSRR 2013 - IEEE International Symposium on Safety, Security, and Rescue Robotics, Linköping, October 21-26, 2013.
Projects
INTRO
Available from: 2014-04-01. Created: 2014-04-01. Last updated: 2015-08-04. Bibliographically approved.
5. Priming as a means to reduce ambiguity in learning from demonstration
2016 (English) In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 8, no 1, 5-19 p. Article in journal (Refereed), Published
Abstract [en]

Learning from Demonstration (LfD) is an established robot learning technique by which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention behind one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. The conducted experiments show that the developed method leads to faster and improved understanding of the intention behind a demonstration by reducing ambiguity.
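
The sketch below shows one hedged reading of how a priming bias might interact with an activation-based memory: each candidate intention carries a decaying activation, primes raise it, and the bias multiplies into the evidence from an ambiguous demonstration. The class, decay rate and exponential bias are assumptions made for the example, not the paper's model.

    import math

    class IntentionMemory:
        """Toy memory: each intention has a decaying activation raised by primes."""

        def __init__(self, intentions, decay=0.05):
            self.activation = {i: 0.0 for i in intentions}
            self.decay = decay

        def step(self):
            # Activations fade over time, as in human memory models.
            for i in self.activation:
                self.activation[i] *= (1.0 - self.decay)

        def prime(self, intention, strength=1.0):
            self.activation[intention] += strength

        def infer(self, likelihood):
            # Combine demonstration evidence with the priming bias.
            score = {i: likelihood[i] * math.exp(self.activation[i])
                     for i in self.activation}
            total = sum(score.values())
            return {i: s / total for i, s in score.items()}

    mem = IntentionMemory(["stack", "sort"])
    mem.prime("sort")  # a contextual hint given before the demonstration
    print(mem.infer({"stack": 0.5, "sort": 0.5}))  # the ambiguous demo now favors "sort"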

Place, publisher, year, edition, pages
Dordrecht: Springer, 2016
Keyword
Learning from Demonstration, Priming, Ant System, Semantic Networks, Ambiguity, Behavior Learning
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-97076 (URN)
10.1007/s12369-015-0292-0 (DOI)
000369276200002 ()
Available from: 2014-12-10. Created: 2014-12-10. Last updated: 2017-12-05. Bibliographically approved.
6. On the Similarities Between Control Based and Behavior Based Visual Servoing
2015 (English) In: Proceedings of the 30th Annual ACM Symposium on Applied Computing, New York: Association for Computing Machinery (ACM), 2015, 320-326 p. Conference paper, Published paper (Refereed)
Abstract [en]

Robotics is tightly connected to both artificial intelligence (AI) and control theory. Both AI-based and control-based robotics are active and successful research areas, but the research is often conducted by well separated communities. In this paper, we compare the two approaches in a case study for the design of a robot that should move its arm towards an object with the help of camera data. The control-based approach is a model-free version of Image Based Visual Servoing (IBVS), which is based on mathematical modeling of the sensing and motion task. The AI approach, here denoted Behavior-Based Visual Servoing (BBVS), contains elements that are biologically plausible and inspired by schema theory. We show how the two approaches lead to very similar solutions, even identical ones given a few simplifying assumptions. This similarity is shown both analytically and numerically. However, in a simple picking task with a 3-DoF robot arm, BBVS shows significantly higher performance than the IBVS approach, partly because it contains more manually tuned parameters. While the results obviously do not apply to all tasks and solutions, the study illustrates the strengths and weaknesses of both approaches, and how tightly connected they are despite very different starting points and methodologies.
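
For readers unfamiliar with IBVS, the classical control law drives the image-feature error to zero with v = -λ L⁺ (s - s*); a numerical toy follows. The interaction matrix L below is a placeholder chosen for simplicity; in the paper's model-free variant the mapping is not derived analytically, so treat this purely as the textbook reference point being compared against.

    import numpy as np

    lam = 0.5                          # proportional control gain
    s = np.array([120.0, 80.0])        # current image feature (pixels)
    s_star = np.array([100.0, 100.0])  # desired image feature (pixels)

    # Placeholder interaction matrix mapping camera velocity to feature motion;
    # a full IBVS derivation computes it from feature depth and camera intrinsics.
    L = np.array([[-1.0,  0.0],
                  [ 0.0, -1.0]])

    e = s - s_star                     # feature-space error
    v = -lam * np.linalg.pinv(L) @ e   # classical IBVS velocity command
    print(v)                           # [ 10. -10.]: the command shrinks the error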

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2015
Keyword
Behavior Based Visual Servoing, Image Based Visual Servoing, Behavior Based Systems
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-97075 (URN)
10.1145/2695664.2695949 (DOI)
000381029800050 ()
978-1-4503-3196-8 (ISBN)
Conference
30th ACM/SIGAPP Symposium on Applied Computing (SAC), Salamanca, Spain, Apr 13-17, 2015.
Available from: 2014-12-10. Created: 2014-12-10. Last updated: 2016-11-30. Bibliographically approved.
7. Applying a Priming Mechanism for Intention Recognition in Shared Control
2015 (English) In: 2015 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2015, 35-41 p. Conference paper, Published paper (Refereed)
Abstract [en]

In many robotic shared control applications, users are forced to focus hard on the robot, either because of the task's high sensitivity or because the robot misunderstands the user's intention. This causes frustration and dissatisfaction for the user and reduces overall efficiency. The user's intention is sometimes unclear and hard to identify without some kind of bias in the identification process. In this paper, we present a solution in which an attentional mechanism helps the robot recognize the user's intention. The solution uses a priming mechanism and parameterized behavior primitives to support intention recognition and improve shared control in teleoperation tasks.
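
The following sketch shows one hedged reading of the approach: the operator's input is matched against parameterized behavior primitives, and a primed primitive needs less evidence to be selected. The primitives, bias values and cosine-similarity measure are all assumptions made for the example, not the paper's implementation.

    import numpy as np

    primitives = {                 # behavior primitives as unit motion directions
        "reach_left":  np.array([-1.0, 0.0]),
        "reach_right": np.array([ 1.0, 0.0]),
    }
    prime_bias = {"reach_left": 0.3, "reach_right": 0.0}  # context primes the left target

    def recognize(user_motion):
        """Score each primitive by similarity to the user's input plus its prime."""
        u = user_motion / np.linalg.norm(user_motion)
        scores = {name: float(u @ direction) + prime_bias[name]
                  for name, direction in primitives.items()}
        return max(scores, key=scores.get)

    # A noisy, slightly rightward nudge still resolves to the primed intention.
    print(recognize(np.array([0.1, 1.0])))  # -> "reach_left"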

Keyword
Shared Control, Priming, Semantic Networks, Intention Recognition
National Category
Robotics
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-97074 (URN)
000380447000006 ()
978-1-4799-8015-4 (ISBN)
Conference
5th IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), Orlando, FL, March 9-12, 2015.
Available from: 2014-12-10. Created: 2014-12-10. Last updated: 2016-09-30. Bibliographically approved.

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 8182 kB, application/pdf)
spikblad (SPIKBLAD01.pdf, 16 kB, application/pdf)
