Visual Explanations for DNNs with Contextual Importance
Anjomshoae, Sule: Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-1232-346X
Jiang, Lili: Umeå University, Faculty of Science and Technology, Department of Computing Science.
Främling, Kary: Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-8078-5172
2021 (English). In: Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3–7, 2021, Revised Selected Papers / [ed] Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling, Springer, 2021, Vol. 12688, p. 83–96. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous agents and robots with vision capabilities powered by machine learning algorithms such as Deep Neural Networks (DNNs) are being deployed in many industrial environments. While DNNs have improved accuracy in many prediction tasks, it has been shown that even modest disturbances in their input produce erroneous results. Such errors have to be detected and dealt with to make the deployment of DNNs secure in real-world applications. Several explanation methods have been proposed to understand the inner workings of these models. In this paper, we present how Contextual Importance (CI) can make DNN results more explainable in an image classification task without peeking inside the network. We produce explanations for individual classifications by perturbing an input image through over-segmentation and evaluating the effect on a prediction score. The output then highlights the segments that contribute most to a prediction. Results are compared with two explanation methods, namely mask perturbation and LIME. The results for the MNIST hand-written digit dataset produced by the three methods show that CI provides better visual explainability.
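
The procedure described in the abstract is occlusion-based: over-segment the input image, perturb one segment at a time, and measure the resulting change in the model's prediction score. The sketch below illustrates that generic idea only, not the authors' exact Contextual Importance computation; it assumes a Keras-style classifier exposing `model.predict`, uses skimage superpixels, and the helper name `segment_importance` is made up for illustration.

```python
# Minimal sketch of the segment-perturbation idea from the abstract (an
# assumption-laden illustration, not the paper's exact CI method).
import numpy as np
from skimage.segmentation import slic

def segment_importance(model, image, target_class, n_segments=50):
    """Score each superpixel by the drop it causes in the target-class score."""
    # Over-segment the image into superpixels (channel_axis=None for grayscale).
    segments = slic(image, n_segments=n_segments,
                    channel_axis=-1 if image.ndim == 3 else None)
    base_score = model.predict(image[np.newaxis])[0, target_class]

    importance = {}
    for seg_id in np.unique(segments):
        perturbed = image.copy()
        perturbed[segments == seg_id] = 0                  # occlude one segment
        score = model.predict(perturbed[np.newaxis])[0, target_class]
        importance[seg_id] = float(base_score - score)     # larger drop = more important
    return segments, importance
```

Segments with the largest score drops would then be highlighted as the regions contributing most to the classification, comparable in spirit to the mask-perturbation and LIME baselines mentioned in the abstract.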

Place, publisher, year, edition, pages
Springer, 2021. Vol. 12688, p. 83-96
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12688
Keywords [en]
Contextual importance, Deep learning, Explainable artificial intelligence, Image classification
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-187096
DOI: 10.1007/978-3-030-82017-6_6
ISI: 000691781800006
Scopus ID: 2-s2.0-85113329479
ISBN: 978-3-030-82016-9 (print)
ISBN: 978-3-030-82017-6 (electronic)
OAI: oai:DiVA.org:umu-187096
DiVA id: diva2:1590317
Conference
3rd International Workshop on Explainable, Transparent AI and Multi-Agent Systems, EXTRAAMAS 2021, Virtual, Online, May 3-7, 2021.
Note

Also part of the Lecture Notes in Artificial Intelligence (LNAI) book sub-series, volume 12688.

Available from: 2021-09-02 Created: 2021-09-02 Last updated: 2023-09-05. Bibliographically approved.
In thesis
1. Context-based explanations for machine learning predictions
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title [sv]
Kontextbaserade förklaringar för maskininlärningsförutsägelser (Context-based explanations for machine learning predictions)
Abstract [en]

In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed about machine-generated decisions. Individuals affected by these decisions can question, confront and challenge the inferences automatically produced by machine learning models. Consequently, AI systems need to be transparent and explainable for various practical applications.

Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. Despite this importance, existing studies mainly focus on creating mathematically interpretable models or on explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users evaluating the correctness of a model and are often hard for general users to interpret.

Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots, which are known for their ability to communicate their decisions in human-understandable terms. Building on that review, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction.

Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or serve as a baseline against which other explanation techniques are compared, providing interpretation indicators for both experts and non-technical users. The findings would also be of interest to domains that use machine learning models for high-stakes decision-making and want to investigate the practical utility of the proposed explanation methods.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022. p. 48
Series
Report / UMINF, ISSN 0348-0542
Keywords
Explainable AI, explainability, interpretability, black-box models, deep learning, neural networks, contextual importance
National Category
Computer Systems
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-198943
ISBN: 978-91-7855-859-9
ISBN: 978-91-7855-860-5
Public defence
2022-09-26, NAT.D.320, Naturvetarhuset, Umeå, 08:30 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-09-05 Created: 2022-08-29 Last updated: 2022-08-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
