Explanations of black-box model predictions by contextual importance and utility
Anjomshoae, Sule. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI) ORCID iD: 0000-0002-1232-346X
Främling, Kary. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI) ORCID iD: 0000-0002-8078-5172
Najjar, Amro. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå University. (Explainable AI)
2019 (English). In: Explainable, transparent autonomous agents and multi-agent systems: first international workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019, revised selected papers / [ed] Davide Calvaresi, Amro Najjar, Michael Schumacher, Kary Främling, Springer, 2019, pp. 95-109. Chapter in book, part of anthology (Refereed)
Abstract [en]

The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end-user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
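As a rough illustration of the idea, the sketch below estimates CI and CU for a single feature of a single instance by probing a black-box prediction function over that feature's value range while holding the rest of the instance (the context) fixed. This is a minimal sketch, not the authors' implementation: the function name `contextual_importance_utility`, the grid-sampling approximation, and the assumption that the model output is a class probability bounded by [0, 1] are all added here for illustration.

```python
import numpy as np

def contextual_importance_utility(predict, x, j, feature_range, n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU) of
    feature j for the prediction at instance x.

    predict       -- black-box function mapping a 2-D input array to 1-D outputs,
                     e.g. the predicted probability of one class (assumed here)
    x             -- 1-D array, the instance being explained
    j             -- index of the feature being explained
    feature_range -- (low, high) bounds over which feature j may vary
    """
    # Vary only feature j over its range, keeping the other features fixed
    # at their values in x -- this fixed part is the "context".
    grid = np.linspace(feature_range[0], feature_range[1], n_samples)
    X = np.tile(x, (n_samples, 1))
    X[:, j] = grid
    y = predict(X)

    cmin, cmax = float(y.min()), float(y.max())  # output range in this context
    absmin, absmax = 0.0, 1.0  # absolute output range (assumed: probabilities)

    # CI: the share of the absolute output range that feature j spans here.
    ci = (cmax - cmin) / (absmax - absmin)
    # CU: how favourable the current value of feature j is within that span.
    y0 = float(predict(x.reshape(1, -1))[0])
    cu = (y0 - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

if __name__ == "__main__":
    # Toy usage on Iris (one of the paper's examples): estimate the role of
    # petal length (feature index 2) in the predicted probability of setosa.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    iris = load_iris()
    model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
    predict_setosa = lambda X: model.predict_proba(X)[:, 0]

    ci, cu = contextual_importance_utility(
        predict_setosa, iris.data[0].copy(), j=2,
        feature_range=(iris.data[:, 2].min(), iris.data[:, 2].max()))
    print(f"CI = {ci:.2f}, CU = {cu:.2f}")
```

A high CI would indicate that petal length can swing the prediction strongly in this context, while CU indicates how favourable the flower's actual petal length is for the setosa class; both values could then be rendered as visuals or natural-language statements as described in the abstract.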

Place, publisher, year, edition, pages
Springer, 2019. pp. 95-109
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 11763
Keywords [en]
Explainable AI, Black-box models, Contextual importance, Contextual utility, Contrastive explanations
HSV category
Research subject
Computing Science
Identifiers
URN: urn:nbn:se:umu:diva-163549
DOI: 10.1007/978-3-030-30391-4_6
ISI: 000695367200006
Scopus ID: 2-s2.0-85072851529
ISBN: 9783030303907 (print)
ISBN: 9783030303914 (digital)
OAI: oai:DiVA.org:umu-163549
DiVA, id: diva2:1354494
Note

First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13–14, 2019

Available from: 2019-09-25 Created: 2019-09-25 Last updated: 2023-09-05 Bibliographically approved
Part of thesis
1. Context-based explanations for machine learning predictions
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Kontextbaserade förklaringar för maskininlärningsförutsägelser
Abstract [en]

In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality of information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed regarding machine-generated decisions: individuals affected by these decisions can question, confront, and challenge the inferences automatically produced by machine learning models. Consequently, such matters necessitate AI systems that are transparent and explainable for various practical applications.

Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. Important as this is, existing studies mainly focus on creating mathematically interpretable models or on explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users evaluating the correctness of a model and are often hard for general users to interpret.

Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of the existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots, which are known for their ability to communicate their decisions in human-understandable terms. Building on that, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction.

Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions, or as a baseline against which to compare other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings would also be of interest to domains that use machine learning models for high-stakes decision-making and want to investigate the practical utility of the proposed explanation methods.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022. p. 48
Series
Report / UMINF, ISSN 0348-0542
Keywords
Explainable AI, explainability, interpretability, black-box models, deep learning, neural networks, contextual importance
HSV category
Research subject
Computing Science
Identifiers
URN: urn:nbn:se:umu:diva-198943
ISBN: 978-91-7855-859-9
ISBN: 978-91-7855-860-5
Public defence
2022-09-26, NAT.D.320, Naturvetarhuset, Umeå, 08:30 (English)
Opponent
Supervisor
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-09-05 Created: 2022-08-29 Last updated: 2022-08-30 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus
