Cognitive Interactive Robot Learning
Umeå University, Faculty of Science and Technology, Department of Computing Science. (Robotics)
2014 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Building general purpose autonomous robots that suit a wide range of user-specified applications requires a leap from today's task-specific machines to more flexible and general ones. To achieve this goal, one should move from traditional preprogrammed robots to learning robots that can easily acquire new skills. Learning from Demonstration (LfD) and Imitation Learning (IL), in which the robot learns by observing a human or robot tutor, are among the most popular learning techniques. Showing the robot how to perform a task is often more natural and intuitive than figuring out how to modify a complex control program. However, teaching robots new skills such that they can reproduce the acquired skills under any circumstances, at the right time and in an appropriate way, requires a good understanding of all the challenges in the field. Studies of imitation learning in humans and animals show that several cognitive abilities are engaged to learn new skills correctly. The most remarkable ones are the ability to direct attention to important aspects of demonstrations, and the ability to adapt observed actions to the agent's own body. Moreover, a clear understanding of the demonstrator's intentions and an ability to generalize to new situations are essential. Once learning is accomplished, various stimuli may trigger the cognitive system to execute new skills that have become part of the robot's repertoire. The goal of this thesis is to develop methods for learning from demonstration that mainly focus on understanding the tutor's intentions and on recognizing which elements of a demonstration need the robot's attention. An architecture containing the cognitive functions required for learning and reproduction of high-level aspects of demonstrations is proposed. Several learning methods for directing the robot's attention and identifying relevant information are introduced.
The architecture integrates motor actions with concepts, objects and environmental states to ensure correct reproduction of skills. Another major contribution of this thesis is a set of methods for resolving ambiguities in demonstrations where the tutor's intentions are not clearly expressed and several demonstrations are required to infer them correctly. The provided solution is inspired by human memory models and priming mechanisms that give the robot clues which increase the probability of inferring the intentions correctly. In addition to robot learning, the developed techniques are applied to a shared control system based on visual servoing guided behaviors and priming mechanisms. The architecture and learning methods are applied and evaluated in several real-world scenarios that require a clear understanding of the intentions behind the demonstrations. Finally, the developed learning methods are compared, and the conditions under which each of them has better applicability are discussed.


Place, publisher, year, edition, pages
Umeå: Umeå University, 2014. 54 pp.
Series
Report / UMINF, ISSN 0348-0542 ; 14.23
Keywords [en]
Learning from Demonstration, Imitation Learning, Human Robot Interaction, High-Level Behavior Learning, Shared Control, Cognitive Architectures, Cognitive Robotics, Priming
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
URN: urn:nbn:se:umu:diva-97422, ISBN: 978-91-7601-189-8 (print), OAI: oai:DiVA.org:umu-97422, DiVA id: diva2:772802
Public defence
2015-01-16, 13:30, MA121, MIT-huset, Umeå (English)
Opponent
Supervisors
Project
INTRO
Research funder
EU, FP7, Seventh Framework Programme, 238486. Available from: 2014-12-19 Created: 2014-12-17 Last updated: 2018-06-07 Bibliographically approved
List of papers
1. Learning High-Level Behaviors From Demonstration Through Semantic Networks
2012 (English) In: Proceedings of the 4th International Conference on Agents and Artificial Intelligence, 2012, pp. 419-426. Conference paper, published paper (Refereed)
Abstract [en]

In this paper we present an approach for high-level behavior recognition and selection, integrated with a low-level controller, to help the robot learn new skills from demonstrations. With a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.
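The spreading-activation idea behind such semantic networks can be illustrated with a minimal sketch. The network below, its node names, and its edge weights are invented for illustration and are not taken from the paper: activating an observed concept spreads activation to related concepts and, eventually, to linked sensory-motor states.

```python
# Illustrative spreading activation over a weighted semantic network.
# Nodes are concepts; edges pass a decayed share of activation onward,
# so observing one concept primes related concepts and motor states.

def spread_activation(graph, activation, decay=0.8, threshold=0.1, steps=2):
    """Propagate activation through a weighted semantic network."""
    for _ in range(steps):
        incoming = {node: 0.0 for node in graph}
        for node, level in list(activation.items()):
            if level < threshold:
                continue  # too weak to propagate further
            for neighbor, weight in graph.get(node, {}).items():
                incoming[neighbor] += level * weight * decay
        for node, extra in incoming.items():
            # Cap activation at 1.0 so repeated input saturates.
            activation[node] = min(1.0, activation.get(node, 0.0) + extra)
    return activation

# Toy network: observing "cup" strongly suggests grasping, weakly a table.
graph = {
    "cup": {"grasp": 0.9, "table": 0.5},
    "grasp": {"gripper-close": 0.8},
    "table": {},
    "gripper-close": {},
}
activation = spread_activation(graph, {"cup": 1.0})
```

After two propagation steps the grasp-related nodes end up more active than the incidental "table" node, which is the sense in which such a network can direct attention toward relevant parts of a demonstration.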

Keywords
Learning from Demonstration, High-Level Behaviors, Semantic Networks, Robot Learning
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-52233 (URN), 10.5220/0003834304190426 (DOI), 000327208400054 (), 2-s2.0-84862136751 (Scopus ID), 978-989-8425-95-9 (ISBN)
Conference
4th International Conference on Agents and Artificial Intelligence (ICAART), 6-8 February 2012, Vilamoura, Algarve, Portugal
Project
INTRO
Research funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2012-02-20 Created: 2012-02-14 Last updated: 2023-07-31 Bibliographically approved
2. Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration
2013 (English) In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, pp. 67-74. Conference paper, published paper (Refereed)
Abstract [en]

This paper gives a brief overview of the challenges in designing cognitive architectures for Learning from Demonstration. By investigating the features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We head towards designing and implementing goal-based imitation learning that not only allows the robot to learn the necessary conditions for executing particular behaviors, but also to understand the intents of the tutor and reproduce the same behaviors accordingly.

Keywords
Learning from Demonstration, Cognitive Architecture, Goal Based Imitation
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-67930 (URN), 10.1109/CogSIMA.2013.6523825 (DOI), 000325568600010 (), 2-s2.0-84879762460 (Scopus ID), 978-1-4673-2437-3 (ISBN)
Conference
3rd IEEE Conference on Cognitive Methods in Situation Awareness and Decision Support, CogSIMA 2013, 25-28 February 2013, San Diego, CA
Project
INTRO
Research funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2013-04-08 Created: 2013-04-08 Last updated: 2023-03-23 Bibliographically approved
3. Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations
2015 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, pp. 24-39. Journal article (Refereed) Published
Abstract [en]

In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires the robot's attention, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions for exhibiting a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, the applicability of the system is evaluated in an object shape classification scenario.
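The general Ant System idea the paper builds on can be sketched in miniature (this is not the paper's actual algorithm; the attribute names, quality measure, and parameters are invented for the example): ants repeatedly sample candidate attributes of the demonstrated objects, and pheromone reinforcement concentrates on attributes that are consistent across demonstrations, i.e. the ones most likely to carry the tutor's intention.

```python
import random

# Illustrative Ant System: ants "vote" on which demonstrated attribute
# explains the tutor's intent; pheromone accumulates on attributes whose
# values agree across demonstrations.

def ant_system(demos, attributes, n_ants=20, n_iters=30,
               evaporation=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = {a: 1.0 for a in attributes}
    for _ in range(n_iters):
        deposits = {a: 0.0 for a in attributes}
        for _ in range(n_ants):
            # Each ant picks an attribute with probability proportional
            # to its pheromone level (roulette-wheel selection).
            total = sum(pheromone.values())
            r, acc = rng.random() * total, 0.0
            for a in attributes:
                acc += pheromone[a]
                if r <= acc:
                    choice = a
                    break
            # Solution quality: fraction of demonstrations sharing the
            # most common value of the chosen attribute.
            values = [d[choice] for d in demos]
            quality = max(values.count(v) for v in set(values)) / len(values)
            deposits[choice] += quality
        for a in attributes:
            pheromone[a] = (1 - evaporation) * pheromone[a] + deposits[a]
    return pheromone

# Three demonstrations where "shape" is consistent but "color" varies,
# so the intended concept is presumably about shape.
demos = [{"shape": "round", "color": "red"},
         {"shape": "round", "color": "blue"},
         {"shape": "round", "color": "green"}]
trails = ant_system(demos, ["shape", "color"])
```

With these toy demonstrations the pheromone trail on "shape" dominates the trail on "color", mirroring how reinforcement over several demonstrations can single out the relevant attribute.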

Place, publisher, year, edition, pages
Elsevier, 2015
Keywords
Learning from Demonstration, Semantic Networks, Ant Colony Optimization, High-Level Behavior Learning
National subject category
Computer vision and robotics (autonomous systems)
Identifiers
urn:nbn:se:umu:diva-87257 (URN), 10.1016/j.robot.2014.12.001 (DOI), 000349724400003 (), 2-s2.0-84921434684 (Scopus ID)
Project
INTRO
Research funder
EU, FP7, Seventh Framework Programme, 238486
Available from: 2014-03-26 Created: 2014-03-26 Last updated: 2023-03-23 Bibliographically approved
4. Development of a Search and Rescue field robotic assistant
2013 (English) In: 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) / [ed] IEEE, IEEE, 2013. Conference paper, published paper (Refereed)
Abstract [en]

The work introduced in this paper was performed as part of the FP7 INTRO (Marie Curie ITN) project. We describe the activities undertaken towards the development of a field robotic assistant for a Search and Rescue application. We specifically target a rubble clearing task, where the robot ferries small pieces of rubble between two waypoints assigned to it by the human. The aim is to complement a human worker with a robotic assistant for this task, while maintaining a comparable level of speed and efficiency in the task execution. Towards this end we develop and integrate software capabilities in mobile navigation, arm manipulation and high-level task sequence learning. Early outdoor experiments carried out in a quarry are also presented.

Place, publisher, year, edition, pages
IEEE, 2013
Series
IEEE International Symposium on Safety Security and Rescue Robots, ISSN 2374-3247
Keywords
arm manipulation, high-level task sequence learning, human worker, mobile navigation, rubble clearing task, search and rescue application, search and rescue field robotic assistant, software capabilities, task execution
National subject category
Computer vision and robotics (autonomous systems)
Identifiers
urn:nbn:se:umu:diva-87439 (URN), 10.1109/SSRR.2013.6719357 (DOI), 000350163600042 (), 2-s2.0-84894171718 (Scopus ID), 978-1-4799-0880-6 (ISBN), 978-1-4799-0879-0 (ISBN)
Conference
IEEE International Symposium on Safety, Security, and Rescue Robotics, SSRR 2013, Linköping, Sweden, 21-26 October, 2013
Project
INTRO
Available from: 2014-04-01 Created: 2014-04-01 Last updated: 2023-03-24 Bibliographically approved
5. Priming as a means to reduce ambiguity in learning from demonstration
2016 (English) In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 8, no. 1, pp. 5-19. Journal article (Refereed) Published
Abstract [en]

Learning from Demonstration (LfD) is an established robot learning technique by which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention behind one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. The conducted experiments show that the developed method leads to faster and better understanding of the intention behind a demonstration by reducing ambiguity.
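The core intuition of priming can be shown with a minimal sketch (the intention names, bias values, and scoring scheme below are illustrative assumptions, not the paper's memory model): recent context pre-activates related intention hypotheses, so an otherwise ambiguous demonstration resolves toward the primed interpretation.

```python
# Illustrative priming bias on intention inference: priming acts as a
# multiplicative boost on the evidence for related intentions.

def infer_intention(likelihoods, priming=None):
    """Score intention hypotheses, optionally biased by priming."""
    priming = priming or {}
    scores = {}
    for intention, likelihood in likelihoods.items():
        scores[intention] = likelihood * (1.0 + priming.get(intention, 0.0))
    total = sum(scores.values())
    return {i: s / total for i, s in scores.items()}

# An ambiguous demonstration supports two intentions equally...
likelihoods = {"clean-table": 0.5, "move-object": 0.5}
unbiased = infer_intention(likelihoods)
# ...but recently seeing a sponge primes the cleaning intention.
primed = infer_intention(likelihoods, priming={"clean-table": 0.8})
```

Without priming the two hypotheses tie; with priming, "clean-table" dominates after the very same single demonstration, which is the sense in which priming can reduce the number of demonstrations needed.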

Place, publisher, year, edition, pages
Dordrecht: Springer, 2016
Keywords
Learning from Demonstration, Priming, Ant System, Semantic Networks, Ambiguity, Behavior Learning
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-97076 (URN), 10.1007/s12369-015-0292-0 (DOI), 000369276200002 (), 2-s2.0-84957557956 (Scopus ID)
Available from: 2014-12-10 Created: 2014-12-10 Last updated: 2023-03-24 Bibliographically approved
6. On the Similarities Between Control Based and Behavior Based Visual Servoing
2015 (English) In: Proceedings of the 30th Annual ACM Symposium on Applied Computing, New York: Association for Computing Machinery (ACM), 2015, pp. 320-326. Conference paper, published paper (Refereed)
Abstract [en]

Robotics is tightly connected to both artificial intelligence (AI) and control theory. Both AI-based and control-based robotics are active and successful research areas, but research is often conducted by well separated communities. In this paper, we compare the two approaches in a case study for the design of a robot that should move its arm towards an object with the help of camera data. The control-based approach is a model-free version of Image Based Visual Servoing (IBVS), which is based on mathematical modeling of the sensing and motion task. The AI approach, here denoted Behavior-Based Visual Servoing (BBVS), contains elements that are biologically plausible and inspired by schema theory. We show how the two approaches lead to very similar solutions, even identical given a few simplifying assumptions. This similarity is shown both analytically and numerically. However, in a simple picking task with a 3 DoF robot arm, BBVS shows significantly higher performance than the IBVS approach, partly because it contains more manually tuned parameters. While the results obviously do not apply to all tasks and solutions, the case study illustrates the strengths and weaknesses of both approaches, and how they are tightly connected and share many similarities despite very different starting points and methodologies.
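The control-based side of the comparison can be sketched in toy form: a planar 2-DoF arm whose "image feature" is its end-effector position, driven by the classic proportional visual-servoing law q_dot = -gain * J^-1 * (s - s*). This is a generic illustration with invented link lengths, gain, and target, not the paper's 3-DoF IBVS implementation.

```python
import math

# Toy visual-servoing loop for a planar 2-link arm: the feature error
# (end-effector position minus target) is mapped through the inverse
# Jacobian to joint velocities, and integrated with a small time step.

L1, L2 = 1.0, 0.8  # link lengths (made up for the example)

def forward(q1, q2):
    """End-effector position of the 2-link planar arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """Analytic 2x2 Jacobian of forward() w.r.t. the joint angles."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [L1 * c1 + L2 * c12, L2 * c12]]

def servo(q, target, gain=0.5, dt=0.1, steps=200):
    q1, q2 = q
    for _ in range(steps):
        x, y = forward(q1, q2)
        ex, ey = x - target[0], y - target[1]
        (a, b), (c, d) = jacobian(q1, q2)
        det = a * d - b * c
        if abs(det) < 1e-9:
            break  # near a singular configuration; stop rather than blow up
        # q_dot = -gain * J^{-1} e, with the 2x2 inverse written out.
        dq1 = -gain * (d * ex - b * ey) / det
        dq2 = -gain * (-c * ex + a * ey) / det
        q1 += dq1 * dt
        q2 += dq2 * dt
    return q1, q2

q_final = servo((0.3, 0.6), target=(1.0, 0.8))
```

Because the error dynamics reduce to ds/dt = -gain * e away from singularities, the feature error decays roughly exponentially and the end-effector converges to the target.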

Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2015
Keywords
Behavior Based Visual Servoing, Image Based Visual Servoing, Behavior Based Systems
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-97075 (URN), 10.1145/2695664.2695949 (DOI), 000381029800050 (), 2-s2.0-84955501870 (Scopus ID), 978-1-4503-3196-8 (ISBN)
Conference
30th ACM/SIGAPP Symposium on Applied Computing (SAC), Salamanca, Spain, April 13-17, 2015
Available from: 2014-12-10 Created: 2014-12-10 Last updated: 2023-03-23 Bibliographically approved
7. Applying a Priming Mechanism for Intention Recognition in Shared Control
2015 (English) In: 2015 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2015, pp. 35-41. Conference paper, published paper (Refereed)
Abstract [en]

In many robotics shared control applications, users are forced to focus hard on the robot due to the task’s high sensitivity or the robot’s misunderstanding of the user’s intention. This brings frustration and dissatisfaction to the user and reduces overall efficiency. The user’s intention is sometimes unclear and hard to identify without some kind of bias in the identification process. In this paper, we present a solution in which an attentional mechanism helps the robot to recognize the user’s intention. The solution uses a priming mechanism and parameterized behavior primitives to support intention recognition and improve shared control for teleoperation tasks.

Keywords
Shared Control, Priming, Semantic Networks, Intention Recognition
National subject category
Robotics and automation
Research subject
administrative data processing
Identifiers
urn:nbn:se:umu:diva-97074 (URN), 000380447000006 (), 978-1-4799-8015-4 (ISBN)
Conference
5th IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), Orlando, FL, March 9-12, 2015
Available from: 2014-12-10 Created: 2014-12-10 Last updated: 2018-06-07 Bibliographically approved

Open Access in DiVA

fulltext (8182 kB), 787 downloads
File information
File name: FULLTEXT01.pdf, File size: 8182 kB, Checksum: SHA-512
f61ffc128b5da0349d1c8f10d339201da11a4460344dd4ac819bf1f88f97b708fa568bb02033539efb6615e2ec3577007ac9055f3e9aaa32000a78418c553f3b
Type: fulltext, Mimetype: application/pdf
spikblad (16 kB), 78 downloads
File information
File name: SPIKBLAD01.pdf, File size: 16 kB, Checksum: SHA-512
4044de3c48f0b386942859529b9f0bc1218095aba31eb0cb02c8e34627f5dc36991950083186222a30b9ee7925d4998518c97a0be3b38c1b9c5fa17d06b9dd62
Type: spikblad, Mimetype: application/pdf

Person

Fonooni, Benjamin

Total: 791 downloads
The number of downloads is the sum of downloads for all full texts. It may include earlier versions that are no longer available.
