Visual Explanations for DNNs with Contextual Importance
Anjomshoae, Sule (Umeå University, Faculty of Science and Technology, Department of Computing Science). ORCID iD: 0000-0002-1232-346X
Jiang, Lili (Umeå University, Faculty of Science and Technology, Department of Computing Science)
Främling, Kary (Umeå University, Faculty of Science and Technology, Department of Computing Science). ORCID iD: 0000-0002-8078-5172
2021 (English) In: Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers / [ed] Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling, Springer, 2021, Vol. 12688, pp. 83-96. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents and robots with vision capabilities powered by machine learning algorithms such as Deep Neural Networks (DNNs) are being deployed in many industrial environments. While DNNs have improved accuracy in many prediction tasks, it has been shown that even modest disturbances in their input produce erroneous results. Such errors have to be detected and dealt with to make the deployment of DNNs safe in real-world applications. Several explanation methods have been proposed to understand the inner workings of these models. In this paper, we present how Contextual Importance (CI) can make DNN results more explainable in an image classification task without peeking inside the network. We produce explanations for individual classifications by perturbing an input image through over-segmentation and evaluating the effect on the prediction score; the output then highlights the segments that contribute most to the prediction. Results are compared with two explanation methods, namely mask perturbation and LIME. The results for the MNIST hand-written digit dataset produced by the three methods show that CI provides better visual explainability.
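
The perturbation scheme described in the abstract is straightforward to prototype. The following Python fragment is a minimal sketch of that idea, not the authors' implementation: it assumes a grayscale image with values in [0, 1], a hypothetical predict_fn that maps an image batch to per-class scores, SLIC over-segmentation from scikit-image, and zero-fill occlusion as the perturbation.

    import numpy as np
    from skimage.segmentation import slic  # scikit-image >= 0.19 assumed

    def segment_importance(image, predict_fn, target_class, n_segments=50):
        """Score each superpixel by how much occluding it lowers the prediction.

        image        : (H, W) grayscale array with values in [0, 1]
        predict_fn   : callable, (N, H, W) batch -> (N, n_classes) scores (assumed)
        target_class : index of the class being explained
        """
        # Over-segment the image into superpixels.
        segments = slic(image, n_segments=n_segments, channel_axis=None)
        base_score = predict_fn(image[None])[0, target_class]

        importance = {}
        for seg_id in np.unique(segments):
            perturbed = image.copy()
            perturbed[segments == seg_id] = 0.0  # occlude one segment (zero fill)
            score = predict_fn(perturbed[None])[0, target_class]
            # A larger drop in the prediction score means the segment mattered more.
            importance[seg_id] = float(base_score - score)
        return segments, importance

Ranking the segments by this score drop and overlaying the top-scoring ones on the input yields the kind of segment-level saliency map that the paper compares against mask perturbation and LIME on MNIST.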

Place, publisher, year, edition, pages
Springer, 2021. Vol. 12688, pp. 83-96
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12688
Keywords [en]
Contextual importance, Deep learning, Explainable artificial intelligence, Image classification
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-187096
DOI: 10.1007/978-3-030-82017-6_6
ISI: 000691781800006
Scopus ID: 2-s2.0-85113329479
ISBN: 978-3-030-82016-9 (print)
ISBN: 978-3-030-82017-6 (digital)
OAI: oai:DiVA.org:umu-187096
DiVA id: diva2:1590317
Conference
3rd International Workshop on Explainable, Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, Virtual, Online, May 3-7, 2021.
Note

Also part of the Lecture Notes in Artificial Intelligence book sub-series (LNAI, volume 12688).

Available from: 2021-09-02. Created: 2021-09-02. Last updated: 2023-09-05. Bibliographically approved.
Part of thesis
1. Context-based explanations for machine learning predictions
2022 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Kontextbaserade förklaringar för maskininlärningsförutsägelser
Abstract [en]

In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed regarding machine-generated decisions. Individuals affected by these decisions can question, confront, and challenge the inferences automatically produced by machine learning models. Consequently, such matters require AI systems to be transparent and explainable in various practical applications.

Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. Despite this importance, existing studies mainly focus on creating mathematically interpretable models or on explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users evaluating the correctness of a model, and are often hard for general users to interpret.

Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots, which are known for their ability to communicate their decisions in human-understandable terms. Building on that review, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction.

Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or as a baseline to compare to other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings would also be of interest to domains using machine learning models for high-stake decision-making to investigate the practical utility of proposed explanation methods.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022. p. 48
Series
Report / UMINF, ISSN 0348-0542
Keywords
Explainable AI, explainability, interpretability, black-box models, deep learning, neural networks, contextual importance
National subject category
Computer Systems
Research subject
Computing Science
Identifiers
URN: urn:nbn:se:umu:diva-198943
ISBN: 978-91-7855-859-9
ISBN: 978-91-7855-860-5
Public defence
2022-09-26, NAT.D.320, Naturvetarhuset, Umeå, 08:30 (English)
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-09-05. Created: 2022-08-29. Last updated: 2022-08-30. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus
