Explaining graph convolutional network predictions for clinicians: an explainable AI approach to Alzheimer’s disease classification
Anjomshoae, Sule: Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-1232-346X
Pudas, Sara: Umeå University, Faculty of Medicine, Department of Integrative Medical Biology (IMB); Umeå Centre for Functional Brain Imaging (UFBI). ORCID iD: 0000-0001-9512-3289
2024 (English). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1334613. Article in journal (Refereed). Published.
Abstract [en]

Introduction: Graph-based representations are becoming more common in the medical domain, where each node represents a patient and the edges signify associations between patients, so that relating individuals to diseases and symptoms becomes a node classification task. In this study, a Graph Convolutional Network (GCN) model was used to capture differences in neurocognitive, genetic, and brain atrophy patterns that can predict cognitive status, ranging from Normal Cognition (NC) to Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD), on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Elucidating model predictions is vital in medical applications to promote clinical adoption and establish physician trust. Therefore, we introduce a decomposition-based explanation method for individual patient classification.
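
As a concrete illustration of the kind of model described above, the following is a minimal sketch of a two-layer GCN node classifier on a patient graph, written with PyTorch Geometric. The layer sizes, dropout rate, and input dimensionality are placeholder assumptions; the paper's exact architecture and hyperparameters are not specified here.

```python
# A minimal sketch of a patient-graph GCN classifier, assuming PyTorch
# Geometric is available. Hidden size and dropout are illustrative choices,
# not the authors' configuration.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class PatientGCN(torch.nn.Module):
    def __init__(self, num_features: int, hidden: int = 16, num_classes: int = 3):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index, edge_weight=None):
        # First convolution aggregates each patient's weighted neighborhood.
        h = F.relu(self.conv1(x, edge_index, edge_weight))
        h = F.dropout(h, p=0.5, training=self.training)
        # Second convolution maps hidden features to class logits (NC/MCI/AD).
        return self.conv2(h, edge_index, edge_weight)
```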

Methods: Our method involves analyzing the output variations that result from decomposing the input values, which allows us to determine each input's degree of impact on the prediction. Through this process, we gain insight into how each feature from various modalities, at both the individual and group levels, contributes to the diagnostic result. Given that graph data carries critical information in its edges, we also studied the relational data by silencing all the edges of a particular class, thereby obtaining explanations at the neighborhood level.
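
The two probes described above can be sketched as follows, assuming `model` is a GCN like the one in the previous snippet that accepts node features, an edge index, and edge weights. Zeroing one feature at a time and reading off the change in the predicted class probability illustrates the decomposition idea, and dropping all edges whose source node belongs to a given class illustrates edge silencing; both are hedged illustrations of the general approach, not the authors' exact procedure.

```python
# Hedged illustration of decomposition-based feature impacts and class-wise
# edge silencing; `model`, `x`, `edge_index`, `edge_weight`, and `labels`
# are assumed to follow the previous sketch's conventions.
import torch

@torch.no_grad()
def feature_impacts(model, x, edge_index, edge_weight, node, target_class):
    """Change in predicted probability when each input feature is zeroed."""
    base = model(x, edge_index, edge_weight).softmax(dim=-1)[node, target_class]
    impacts = []
    for f in range(x.size(1)):
        x_pert = x.clone()
        x_pert[node, f] = 0.0  # decompose the input: drop one feature
        prob = model(x_pert, edge_index, edge_weight).softmax(dim=-1)[node, target_class]
        impacts.append((base - prob).item())  # positive = feature supported the prediction
    return impacts

@torch.no_grad()
def silence_class_edges(model, x, edge_index, edge_weight, labels, node, cls, target_class):
    """Predicted probability after removing all edges originating from class `cls`."""
    keep = labels[edge_index[0]] != cls  # mask out edges whose source node is in `cls`
    probs = model(x, edge_index[:, keep], edge_weight[keep]).softmax(dim=-1)
    return probs[node, target_class].item()
```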

Results: Our functional evaluation showed that the explanations remain stable under minor changes in input values, specifically for edge weights exceeding 0.80. Additionally, our comparative analysis against SHAP values yielded comparable results at a significantly reduced computational cost. To further validate the model's explanations, we conducted a survey study with 11 domain experts. The majority (71%) of responses confirmed the correctness of the explanations, which were also rated above six on a 10-point scale for understandability.
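
The stability check described above can be sketched as follows, reusing the hypothetical `feature_impacts` helper from the previous snippet: perturb the inputs with small Gaussian noise, recompute the per-feature impacts, and measure how much of the top-k explanation survives. The noise level and the rank-overlap metric are illustrative assumptions, not the paper's exact functional evaluation.

```python
# Hedged illustration of an explanation-stability check: small input noise
# should leave the top-k most impactful features largely unchanged. Reuses
# the hypothetical feature_impacts() helper from the previous sketch.
import torch

def explanation_stability(model, x, edge_index, edge_weight, node, target_class,
                          noise_std=0.01, top_k=5):
    base = feature_impacts(model, x, edge_index, edge_weight, node, target_class)
    x_noisy = x + noise_std * torch.randn_like(x)  # minor perturbation of inputs
    pert = feature_impacts(model, x_noisy, edge_index, edge_weight, node, target_class)

    def top(values):
        # Indices of the top-k features by absolute impact.
        return set(sorted(range(len(values)), key=lambda i: -abs(values[i]))[:top_k])

    # Fraction of top-k features shared between original and perturbed explanations.
    return len(top(base) & top(pert)) / top_k
```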

Discussion: Strategies to overcome perceived limitations, such as the GCN's overreliance on demographic information, are discussed to facilitate future adoption into clinical practice and to build clinicians' trust in the model as a diagnostic decision support system.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2024. Vol. 6, article id 1334613
Keywords [en]
explainable AI, multimodal data, graph convolutional networks, Alzheimer's disease, node classification
National Category
Geriatrics; Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-198788
DOI: 10.3389/frai.2023.1334613
ISI: 001152933100001
PubMedID: 38259822
Scopus ID: 2-s2.0-85182673168
OAI: oai:DiVA.org:umu-198788
DiVA id: diva2:1689844
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Alzheimerfonden
Note

Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (http://adni.loni.usc.edu/).

Available from: 2022-08-24. Created: 2022-08-24. Last updated: 2024-02-13. Bibliographically approved.
In thesis
1. Context-based explanations for machine learning predictions
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title [sv]
Kontextbaserade förklaringar för maskininlärningsförutsägelser
Abstract [en]

In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed regarding machine-generated decisions; individuals affected by these decisions can question, confront, and challenge the inferences automatically produced by machine learning models. Consequently, such matters necessitate that AI systems be transparent and explainable for various practical applications.

Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. Despite this importance, existing studies mainly focus on creating mathematically interpretable models or on explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users evaluating the correctness of a model and are often hard for general users to interpret.

Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots, which are known for their ability to communicate their decisions in human-understandable terms. Building on this, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction.

Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or as a baseline to compare to other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings would also be of interest to domains using machine learning models for high-stake decision-making to investigate the practical utility of proposed explanation methods.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2022. p. 48
Series
Report / UMINF, ISSN 0348-0542
Keywords
Explainable AI, explainability, interpretability, black-box models, deep learning, neural networks, contextual importance
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-198943 (URN)
978-91-7855-859-9 (ISBN)
978-91-7855-860-5 (ISBN)
Public defence
2022-09-26, NAT.D.320, Naturvetarhuset, Umeå, 08:30 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2022-09-05. Created: 2022-08-29. Last updated: 2022-08-30. Bibliographically approved.

Open Access in DiVA

fulltext: FULLTEXT01.pdf (3078 kB, application/pdf)

Other links

Publisher's full text, PubMed, Scopus
