Explanations of black-box model predictions by contextual importance and utility
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI)
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI). ORCID iD: 0000-0002-8078-5172
Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI)
2019 (English). In: Explainable, transparent autonomous agents and multi-agent systems: first international workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019, revised selected papers / [ed] Davide Calvaresi, Amro Najjar, Michael Schumacher, Kary Främling, Springer, 2019, pp. 95-109. Chapter in book, part of anthology (Refereed)
Abstract [en]

The significant advances in autonomous systems, together with a much wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car-selection example and an Iris flower classification task by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
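
The abstract describes CI and CU only in prose. As a rough illustration, the sketch below estimates both values for a single feature of a black-box model by varying that feature over its allowed range while the other features stay fixed at the instance's values; the function name, the uniform grid sampling, and the default output range are our assumptions, not the authors' implementation.

import numpy as np

def contextual_importance_utility(predict, instance, feature,
                                  feature_range, output_range=(0.0, 1.0),
                                  n_samples=200):
    # Hypothetical helper, not from the paper. A common formulation:
    #   CI = (Cmax - Cmin) / (absmax - absmin)
    #   CU = (y - Cmin) / (Cmax - Cmin)
    # where Cmax/Cmin are the largest/smallest outputs reached when the
    # studied feature varies over feature_range with all other features
    # fixed at the instance values, y is the model's prediction for the
    # instance, and absmax/absmin is the overall output range.
    instance = np.asarray(instance, dtype=float)
    grid = np.tile(instance, (n_samples, 1))                   # copies of the instance
    grid[:, feature] = np.linspace(*feature_range, n_samples)  # vary only one feature
    outputs = np.asarray(predict(grid)).ravel()
    c_min, c_max = outputs.min(), outputs.max()
    y = float(np.asarray(predict(instance[None, :])).ravel()[0])
    ci = (c_max - c_min) / (output_range[1] - output_range[0])
    cu = (y - c_min) / (c_max - c_min) if c_max > c_min else 0.0
    return ci, cu

# Toy non-linear model over two features in [0, 1]:
model = lambda X: 0.3 * X[:, 0] + 0.7 * np.sin(np.pi * X[:, 1])
ci, cu = contextual_importance_utility(model, [0.4, 0.2], feature=1,
                                       feature_range=(0.0, 1.0))
print(f"CI = {ci:.2f}, CU = {cu:.2f}")

A high CI means the feature can swing the prediction substantially in this context; a high CU means the instance's actual value of that feature is close to the most favourable one, which is the kind of information the paper's visual and natural-language explanations convey.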

Place, publisher, year, edition, pages
Springer, 2019. pp. 95-109
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 11763
Keywords [en]
Explainable AI, Black-box models, Contextual importance, Contextual utility, Contrastive explanations
HSV category
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-163549
DOI: 10.1007/978-3-030-30391-4_6
ISBN: 9783030303907 (print)
ISBN: 9783030303914 (digital)
OAI: oai:DiVA.org:umu-163549
DiVA, id: diva2:1354494
Note

First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019

Available from: 2019-09-25 Created: 2019-09-25 Last updated: 2019-09-25 Bibliographically approved

Open Access in DiVA

No full text available in DiVA

Other links

Publisher's full text

Person records BETA

Anjomshoae, Sule; Främling, Kary; Najjar, Amro
