Informative communication of robot plans
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7242-2200
2022 (English) In: Advances in practical applications of agents, multi-agent systems, and complex systems simulation: the PAAMS collection / [ed] Frank Dignum; Philippe Mathieu; Juan Manuel Corchado; Fernando De La Prieta, Springer, 2022, p. 332-344. Conference paper, Published paper (Other academic)
Abstract [en]

When a robot is asked to verbalize its plan, it can do so in many ways. For example, a seemingly natural strategy is the incremental one, in which the robot verbalizes its planned actions in plan order. However, an important shortcoming of this type of strategy is that it does not consider what is actually informative to communicate, because it ignores what the user already knows prior to the explanation. In this paper we propose a verbalization strategy for communicating robot plans informatively, by measuring the information gain of verbalizations against a second-order theory of mind of the user that captures the user's prior knowledge about the robot. As shown in our experiments, this strategy lets users understand the robot's goal much more quickly than strategies such as increasing or decreasing plan order. In addition, our formulation hints at what is informative, and why, when a robot communicates its plan.
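The paper defines its measure over a second-order theory of mind of the user; as a rough illustration only, the information gain of a single verbalized action can be sketched as the entropy reduction of the user's belief over the robot's possible goals after a Bayes update. All goal names, actions and probabilities below are invented for illustration and are not taken from the paper.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a discrete distribution {goal: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood, action):
    """Bayes update of the modeled user belief after hearing one verbalized
    action; likelihood[goal][action] = P(action | goal)."""
    unnorm = {g: prior[g] * likelihood[g].get(action, 1e-9) for g in prior}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

def information_gain(prior, likelihood, action):
    """Reduction in the user's uncertainty about the robot's goal."""
    return entropy(prior) - entropy(posterior(prior, likelihood, action))

# Hypothetical second-order model: what the robot believes the user
# believes about its goals, before any explanation is given.
prior = {"fetch_cup": 0.5, "clean_table": 0.5}
likelihood = {
    "fetch_cup":   {"grasp_cup": 0.9, "wipe_surface": 0.1},
    "clean_table": {"grasp_cup": 0.2, "wipe_surface": 0.8},
}

ig = information_gain(prior, likelihood, "grasp_cup")
```

Under this toy model, verbalizing the most discriminative action first reduces the user's uncertainty faster than following plan order, which is the intuition behind the strategy described in the abstract.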

Place, publisher, year, edition, pages
Springer, 2022. p. 332-344
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13616
Keywords [en]
Bayesian network, Human-robot interaction, Mirror agent model, Plan verbalization
National Category
Robotics
Identifiers
URN: urn:nbn:se:umu:diva-192809
DOI: 10.1007/978-3-031-18192-4_27
Scopus ID: 2-s2.0-85141818392
ISBN: 9783031181917 (print)
ISBN: 9783031181924 (electronic)
OAI: oai:DiVA.org:umu-192809
DiVA, id: diva2:1641037
Conference
20th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2022), L'Aquila (Italy), 13-15 July, 2022
Note

Part of the book sub series: Lecture Notes in Artificial Intelligence (LNAI) 

Available from: 2022-02-28. Created: 2022-02-28. Last updated: 2022-12-15. Bibliographically approved.
In thesis
1. Expressing and recognizing intentions
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Uttrycka och känna igen avsikter
Abstract [en]

With the advancement of Artificial Intelligence, intelligent computer programs known as agents are coming increasingly close to the lives of human beings. In an optimistic view, predictions tell us that agents will govern many machines such as phones, robots and cars. Agents will play an important role in our daily life, and because of this it is becoming more and more relevant to give them the capacity to interact with their human users without requiring prior expert training, but rather to support understanding through common-sense reasoning. Striving towards this objective, one important aspect of agent design is intentionality, which relates to an agent's capacity to understand the goals and plans of its users, and to make its own understood. The ability to reason about one's own goals and objectives and those of others is especially important for autonomous agents such as autonomous robots, because it enables them to interact with other agents by relying only on their sensor data and internal computations, and not on explicit data provided by their designer.

Intentionality imbues agents with additional capacities that support cooperative activities: if a helpful agent recognizes an intention, it can proactively intervene in a helpful way, for example by giving advice. Alternatively, whenever the agent detects that its user does not understand its objective, it can communicate the mismatching information. As an illustrative example, consider the case of an autonomous car taking its passenger to the office. After checking the online maps the car identifies a traffic jam ahead and, to avoid it, autonomously decides to change its path mid-way by taking a less populated road. The behavior of the car is quite intelligent; however, if left unexplained, the change of intention would leave the passenger wondering what is happening: he was on the main road to the office and suddenly the car turned left. This would at minimum force him to ask the car what is going on. Instead, a continuous degree of understanding can be maintained if the car intelligently detects such mismatches of intention by computing its passenger's expectations, and thus preemptively communicates the newly selected path whenever required.
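The car example above can be caricatured in a few lines: the agent keeps a model of the route the passenger expects (a second-order belief), compares it against its own plan, and explains itself only when the two diverge. This is a minimal sketch of that decision rule; the route names, the divergence check, and the message template are all invented here and do not come from the thesis.

```python
def first_divergence(planned_route, expected_route):
    """Index of the first waypoint where the agent's plan departs from
    what the passenger expects, or None if the routes agree."""
    for i, (p, e) in enumerate(zip(planned_route, expected_route)):
        if p != e:
            return i
    return None

def maybe_explain(planned_route, expected_route):
    """Preemptively communicate only when a mismatch of intentions is
    detected; stay silent when the passenger's expectation already
    matches the plan."""
    i = first_divergence(planned_route, expected_route)
    if i is None:
        return None  # intentions aligned: no explanation needed
    return f"Rerouting at {planned_route[i]} to avoid congestion ahead."

# The passenger expects the usual route; the car has replanned mid-way.
expected = ["main_road", "main_road", "office"]
planned  = ["main_road", "side_street", "office"]
msg = maybe_explain(planned, expected)
```

The point of the sketch is the asymmetry: the agent reasons over a model of the user's expectations rather than broadcasting every decision, which is the behavior the paragraph above motivates.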

This process of communicating changes in intention looks simple, but it is in fact quite difficult. It requires reasoning about what the user's intentions are, and about how and when they should be aligned with those of the car, either explicitly through a process of explanation, or implicitly through behavior that is interpretable by human beings. To support these capacities, it is becoming apparent that intelligent agents should leverage how we commonly think about things, referred to as common-sense reasoning. Common-sense reasoning relates to how we form conjectures based on what we observe, and agents that form conjectures in the same way may be better collaborators than those that reason in other ways. In this thesis we utilize an established model for common-sense reasoning known as Theory of Mind, of which the thesis contains a brief introduction.

By leveraging Theory of Mind and classical agent architectures, this thesis provides an initial formulation of a novel computational architecture capable of aggregating multiple tasks related to intentionality, such as intent recognition, informative communication of intention, and interpretable behavior. We refer to this model as the Mirror Agent Model, because it envisions the agent as interacting with a mirrored copy of itself whenever it supports intentionality. Inside the model, expressing and recognizing intentions are two faces of the same coin and are duals of each other. This represents a step towards unifying, in a single framework, many tasks related to intentionality that are currently considered mostly independently in the literature.

The thesis first provides introductory chapters on agents, intentions and theory-of-mind reasoning, followed by a chapter describing the developed computational models. This chapter conceptually aggregates many of the algorithms from the papers and aims to provide an initial formulation of the Mirror Agent Model. Finally, the thesis concludes with a summary of contributions and concluding remarks.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022. p. 80
Series
Report / UMINF, ISSN 0348-0542 ; 22.07
Keywords
agent, model, plan, action, human-robot interaction, robot, mirror agent model, intention, recognition, interpretable behavior, artificial intelligence
National Category
Robotics; Computer Systems
Research subject
human-computer interaction; Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-198631
ISBN: 978-91-7855-768-4
ISBN: 978-91-7855-767-7
Public defence
2022-09-16, NAT.D.360, Naturvetarhuset, Umeå, 13:15 (English)
Available from: 2022-08-26. Created: 2022-08-16. Last updated: 2022-08-22. Bibliographically approved.

Open Access in DiVA

fulltext (491 kB), 105 downloads
File information
File name: FULLTEXT02.pdf
File size: 491 kB
Checksum (SHA-512): 9e033e30389ef918974553bb65fa43f92d15286b8684cc4540f9ff8fbd90f4c89e2286094c92f73bd6e80c301e4c2b075ab69dc846340a25a65db38f59491d65
Type: fulltext
Mimetype: application/pdf

Other links
Publisher's full text
Scopus

Authority records

Persiani, Michele; Hellström, Thomas

Total: 152 downloads
The number of downloads is the sum of all downloads of full texts. It may include e.g. previous versions that are no longer available.

Total: 626 hits