Ladder of intentions: unifying agent architectures for explainability and transferability
Barcelona Supercomputing Center, Barcelona, Spain.
Barcelona Supercomputing Center, Barcelona, Spain.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. ORCID iD: 0009-0001-3326-2643
Barcelona Supercomputing Center, Barcelona, Spain; Universitat Politecnica de Catalunya, Barcelona, Spain.
2026 (English) In: Explainable, Trustworthy, and Responsible AI and Multi-Agent Systems: 7th International Workshop, EXTRAAMAS 2025, Revised Selected Papers, Cham: Springer, 2026, p. 127-146. Conference paper, Published paper (Refereed)
Abstract [en]

Within the field of Autonomous Agents, the predominant paradigm is that agents perceive, reflect, reason, and act on an environment, employing some specific decision mechanism to select actions. The process that produces these decisions may nonetheless differ from agent to agent, as the paradigm is agnostic about the concrete action-selection inference. At the same time, the need to explain these decisions is constantly increasing, and the heterogeneity of agents' internal processes has resulted in ad hoc explanation techniques for each architecture, with disparate validation mechanisms that hinder efforts to compare them.

To tackle this, in this contribution we propose a unifying architecture framework based on causality, beliefs, and intentions. The framework allows heterogeneous agents (from BDI and RL to LLM-based agents) to be examined without modification. It cleanly decouples declarative from procedural knowledge, as well as designer-given from learnt representations. It categorises what kind of questions each agent reasoning component can answer and enables a more seamless workflow for transferring knowledge between diverse agent architectures.

Place, publisher, year, edition, pages
Cham: Springer, 2026. p. 127-146
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15936
Keywords [en]
Agent Explainability, Agentic AI, BDI, Cognitive Architecture, Explainable Agency, Intentions, Knowledge representation, Knowledge Transfer, RL, Telic Explanations, XAI
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-246476
DOI: 10.1007/978-3-032-01399-6_8
Scopus ID: 2-s2.0-105020010827
ISBN: 978-3-032-01398-9 (print)
ISBN: 978-3-032-01399-6 (electronic)
OAI: oai:DiVA.org:umu-246476
DiVA, id: diva2:2014527
Conference
7th International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2025, Detroit, USA, May 19-20, 2025
Available from: 2025-11-18. Created: 2025-11-18. Last updated: 2025-11-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Edström, Filip; Brännström, Mattias
