1 - 2 of 2
  • 1.
    Westberg, Marcus; Zelvelder, Amber; Najjar, Amro
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A Historical Perspective on Cognitive Science and Its Influence on XAI Research. 2019. In: Explainable, Transparent Autonomous Agents and Multi-Agent Systems / [ed] Davide Calvaresi, Amro Najjar, Michael Schumacher, Kary Främling. Switzerland: Springer, 2019, pp. 205-219. Conference paper (Refereed)
    Abstract [en]

    Cognitive science and artificial intelligence are interconnected in that developments in one field can affect the frame of reference for research in the other. Changes in our understanding of how the human mind works inadvertently change how we go about creating artificial minds. Similarly, successes and failures in AI can inspire new directions in cognitive science. This article explores the history of the mind in cognitive science over the last 50 years and draws comparisons as to how this has affected AI research, and how AI research in turn has affected shifts in cognitive science. In particular, we look at explainable AI (XAI) and suggest that folk psychology is of particular interest for that area of research. In cognitive science, folk psychology is divided between two theories: theory-theory and simulation theory. We argue that it is important for XAI to recognise and understand this debate, and that reducing reliance on theory-theory by incorporating more simulationist frameworks into XAI could help further the field. We propose that such incorporation would involve robots employing more embodied cognitive processes when communicating with humans, highlighting the importance of bodily action in communication and mindreading.

  • 2.
    Zelvelder, Amber; Westberg, Marcus; Främling, Kary
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Assessing Explainability in Reinforcement Learning. 2021. In: Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop on Explainable, Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers / [ed] Davide Calvaresi, Amro Najjar, Michael Winikoff, Kary Främling. Springer, 2021, Vol. 3, pp. 223-240. Conference paper (Refereed)
    Abstract [en]

    Reinforcement Learning performs well in many different application domains and is starting to receive greater authority and trust from its users. But most people are unfamiliar with how AIs make their decisions, and many feel anxious about AI decision-making. As a result, AI methods suffer from trust issues, which hinders their full-scale adoption. In this paper we determine what the main application domains of Reinforcement Learning are, and to what extent research in those domains has explored explainability. The paper reviews examples from the most active application domains for Reinforcement Learning and suggests guidelines for assessing the importance of explainability for these applications. We present key factors that should be included when evaluating such applications and show how these apply to the examples found. By using these assessment criteria to evaluate the explainability needs of Reinforcement Learning, the research field can be guided toward increased transparency and trust through explanations.
