The mirror agent model: a Bayesian architecture for interpretable agent behavior
Umeå University, Faculty of Science and Technology, Department of Computing Science (Intelligent Robotics).
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7242-2200
2022 (English). In: Explainable and transparent AI and multi-agent systems: 4th international workshop, EXTRAAMAS 2022, virtual event, May 9–10, 2022, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling, Springer Nature, 2022, pp. 111-123. Conference paper, published paper (Other academic)
Abstract [en]

In this paper we present a novel architecture that generates interpretable behavior and explanations. We refer to this architecture as the Mirror Agent Model because it defines the observer model, which is the target of explicit and implicit communication, as a mirror of the agent's own model. To provide a general understanding of this work, we first review prior results addressing the informative communication of agent intentions and the production of legible behavior. In the second part of the paper we furnish the architecture with novel capabilities for explanation through off-the-shelf saliency methods, followed by preliminary qualitative results.
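
The abstract above describes an observer model that mirrors the agent's own model. As a rough, hedged illustration of that idea, the Python sketch below reuses a copy of a goal-conditioned action model as the observer model, performs a Bayesian update over goals from observed actions, and triggers a communication when the observer's belief in the agent's true goal drops too low. The goal set, action model, and threshold are illustrative assumptions and are not taken from the paper.

# Hedged sketch of the mirrored-observer idea; the goals, action model and
# threshold below are illustrative assumptions, not the paper's implementation.

GOALS = ["office", "detour"]
PRIOR = {"office": 0.5, "detour": 0.5}            # observer's prior over goals

# P(action | goal): the agent's own action model, reused as the observer model.
ACTION_MODEL = {
    "office": {"keep_straight": 0.8, "turn_left": 0.2},
    "detour": {"keep_straight": 0.3, "turn_left": 0.7},
}

def observer_posterior(actions, prior=PRIOR, model=ACTION_MODEL):
    """Bayesian update of the mirrored observer's belief over goals."""
    belief = dict(prior)
    for a in actions:
        belief = {g: belief[g] * model[g].get(a, 1e-9) for g in GOALS}
        z = sum(belief.values())
        belief = {g: p / z for g, p in belief.items()}
    return belief

def should_communicate(true_goal, actions, threshold=0.5):
    """Explicitly communicate when the observer would likely infer the wrong goal."""
    return observer_posterior(actions)[true_goal] < threshold

# The agent pursues the detour, but its actions so far still look like the
# usual route, so the mirrored observer would be misled -> communicate.
print(observer_posterior(["keep_straight"]))
print(should_communicate("detour", ["keep_straight"]))   # True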

Place, publisher, year, edition, pages
Springer Nature, 2022, pp. 111-123
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13283
Keywords [en]
Interpretability, Explainability, Bayesian networks, Mirror Agent Model
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-194479
DOI: 10.1007/978-3-031-15565-9_7
ISI: 000870042100007
Scopus ID: 2-s2.0-85140488434
ISBN: 978-3-031-15564-2 (print)
ISBN: 978-3-031-15565-9 (electronic)
OAI: oai:DiVA.org:umu-194479
DiVA, id: diva2:1656416
Conference
4th International Workshop on EXplainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, Virtual event, May 9-10, 2022
Available from: 2022-05-05 Created: 2022-05-05 Last updated: 2022-11-10. Bibliographically reviewed
Part of thesis
1. Expressing and recognizing intentions
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Uttrycka och känna igen avsikter
Abstract [en]

With the advancement of Artificial Intelligence, intelligent computer programs known as agents are coming increasingly close to the lives of human beings. In an optimistic view, predictions tell us that agents will govern many machines such as phones, robots and cars. Agents will play an important role in our daily lives, and because of this, it is becoming more and more relevant to provide them with the capacity to interact with their human users without requiring prior expert training, instead supporting understanding through common sense reasoning. Striving towards this objective, one important aspect of agent design is intentionality, which relates to the agents' capacity to understand the goals and plans of their users, and to make their own understood. The ability to reason about their own goals and objectives and those of others is especially important for autonomous agents, such as autonomous robots, because it enables them to interact with other agents by relying on their sensor data and internal computations only, and not on explicit data provided by their designer.

Intentionality imbues agents with additional capacities that support cooperative activities: if a helpful agent recognizes an intention, it can proactively intervene in a helpful way, for example by giving advice. Alternatively, whenever the agent detects that its user does not understand its objective, it can communicate the mismatching information. As an illustrative example, consider the case of an autonomous car with a passenger on the road to his office. After checking the online maps, the car identifies a traffic jam ahead and, to avoid it, autonomously decides to change its path mid-way by taking a less populated road. The behavior of the car is quite intelligent; however, if left unexplained, the change of intention would leave the passenger wondering what is happening: he was on the main road to the office and suddenly the car turned left. At a minimum, this would force him to ask the car what is going on. Instead, a continuous degree of understanding can be maintained if the car intelligently detects such a mismatch of intentions by computing its passenger's expectations, and thus preemptively communicates the newly selected path whenever required.

This process of communicating changes in intention looks simple, but it is quite difficult. It requires reasoning about what the intentions of the user are, and about how and when they should be aligned with those of the car, either explicitly through a process of explanation, or implicitly through behavior that is interpretable by human beings. To support these capacities, it is becoming apparent that intelligent agents should leverage how we commonly think about things, referred to as common sense reasoning. Common sense reasoning relates to how we form conjectures based on what we observe, and agents that form conjectures in the same way could be better collaborators than those that reason in other ways. In this thesis we utilize an established model for common sense reasoning known as Theory of Mind, of which the thesis contains a brief introduction.

By leveraging Theory of Mind and classical agent architectures, this thesis provides an initial formulation of a novel computational architecture capable of aggregating multiple tasks related to intentionality, such as intent recognition, informative communication of intentions, and interpretable behavior. We refer to this model as the Mirror Agent Model, because it envisions the agent as interacting with a mirrored copy of itself whenever it supports intentionality. Within the model, expressing and recognizing intentions are two sides of the same coin and are dual to each other. This represents a step towards the unification, in a single framework, of many tasks related to intentionality that are currently treated mostly independently in the literature.

The thesis first provides introductory chapters on agents, intentions and theory of mind reasoning, followed by a chapter describing the developed computational models. This chapter conceptually aggregates many of the algorithms from the papers and aims at providing an initial formulation of the Mirror Agent Model. Finally, the thesis concludes with a summary of contributions and concluding remarks.
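
The abstract above describes expressing and recognizing intentions as duals within the Mirror Agent Model. The short sketch below, under the same kind of illustrative assumptions as the earlier snippet (a hypothetical two-goal action model, not the thesis's exact formulation), makes that duality concrete: recognition infers a posterior over the other agent's goals from its actions, while expression selects the next action that makes the mirrored observer most confident in the agent's true goal.

# Hedged sketch of the recognition/expression duality; the model below is a
# hypothetical illustration, not the thesis's exact formulation.

GOALS = ["office", "detour"]
ACTION_MODEL = {                                   # hypothetical P(action | goal)
    "office": {"keep_straight": 0.8, "turn_left": 0.2},
    "detour": {"keep_straight": 0.3, "turn_left": 0.7},
}

def recognize(actions):
    """Recognition: Bayesian posterior over goals given observed actions."""
    belief = {g: 1.0 / len(GOALS) for g in GOALS}
    for a in actions:
        belief = {g: belief[g] * ACTION_MODEL[g].get(a, 1e-9) for g in GOALS}
        z = sum(belief.values())
        belief = {g: p / z for g, p in belief.items()}
    return belief

def express(true_goal, history):
    """Expression (legible behavior): choose the next action that maximizes the
    mirrored observer's belief in the agent's true goal."""
    return max(ACTION_MODEL[true_goal],
               key=lambda a: recognize(history + [a])[true_goal])

print(recognize(["turn_left"]))    # the observer infers the detour goal
print(express("detour", []))       # the agent picks the revealing action: turn_left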

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022, p. 80
Series
Report / UMINF, ISSN 0348-0542 ; 22.07
Keywords
agent, model, plan, action, human-robot interaction, robot, mirror agent model, intention, recognition, interpretable behavior, artificial intelligence
National subject category
Robotics and automation; Computer systems
Research subject
human-computer interaction; computer science
Identifiers
urn:nbn:se:umu:diva-198631 (URN)
978-91-7855-768-4 (ISBN)
978-91-7855-767-7 (ISBN)
Public defence
2022-09-16, NAT.D.360, Naturvetarhuset, Umeå, 13:15 (English)
Available from: 2022-08-26 Created: 2022-08-16 Last updated: 2022-08-22. Bibliographically reviewed

Open Access in DiVA

fulltext (555 kB), 142 downloads
File information
File name: FULLTEXT02.pdf; File size: 555 kB; Checksum: SHA-512
e3691ad352b95992ae019f9eaa45fbcef61adde765a231cdd7cdb8aa2b6adf9fc153c2715f19c6fa08abd2f1ba3605f27618bb9494e7b7ad29b20469bb2675f4
Type: fulltext; Mime type: application/pdf

Other links

Publisher's full text; Scopus

Person

Persiani, Michele; Hellström, Thomas

