
umu.se Publications
1 - 2 of 2
  • 1.
    Westberg, Marcus; Zelvelder, Amber; Najjar, Amro
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A Historical Perspective on Cognitive Science and Its Influence on XAI Research (2019). In: Explainable, Transparent Autonomous Agents and Multi-Agent Systems / [ed] Davide Calvaresi, Amro Najjar, Michael Schumacher, Kary Främling, Switzerland: Springer, 2019, p. 205-219. Conference paper (Refereed)
    Abstract [en]

    Cognitive science and artificial intelligence are interconnected in that developments in one field can affect the framework of reference for research in the other. Changes in our understanding of how the human mind works inadvertently change how we go about creating artificial minds. Similarly, successes and failures in AI can inspire new directions in cognitive science. This article explores the history of the mind in cognitive science over the last 50 years, draws comparisons as to how this has affected AI research, and examines how AI research has in turn driven shifts in cognitive science. In particular, we look at explainable AI (XAI) and suggest that folk psychology is of particular interest for that area of research. In cognitive science, folk psychology is divided between two theories: theory-theory and simulation theory. We argue that it is important for XAI to recognise and understand this debate, and that reducing reliance on theory-theory by incorporating more simulationist frameworks into XAI could help further the field. We propose that such incorporation would involve robots employing more embodied cognitive processes when communicating with humans, highlighting the importance of bodily action in communication and mindreading.

  • 2.
    Zelvelder, Amber; Westberg, Marcus; Främling, Kary
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Assessing Explainability in Reinforcement Learning (2021). In: Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop on Explainable, Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers / [ed] Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling, Springer, 2021, Vol. 3, p. 223-240. Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning performs well in many different application domains and is starting to receive greater authority and trust from its users. But most people are unfamiliar with how AIs make their decisions, and many feel anxious about AI decision-making. As a result, AI methods suffer from trust issues, which hinders their full-scale adoption. In this paper we determine what the main application domains of Reinforcement Learning are, and to what extent research in those domains has explored explainability. This paper reviews examples of the most active application domains for Reinforcement Learning and suggests some guidelines for assessing the importance of explainability in these applications. We present some key factors that should be included when evaluating these applications and show how they apply to the examples found. By using these assessment criteria to evaluate the explainability needs of Reinforcement Learning, the research field can be guided toward increasing transparency and trust through explanations.
