Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9009-0999
Centre for Applied Autonomous Sensor Systems (AASS), Örebro, Sweden.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Aalto University, School of Science and Technology, Finland. ORCID iD: 0000-0002-8078-5172
2022 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 23995-24009. Article in journal (Refereed). Published.
Abstract [en]

Explainable artificial intelligence (XAI) has shed light on numerous applications by clarifying why neural models make specific decisions. However, it remains challenging to measure how sensitive the explanations produced by XAI solutions are. Although different evaluation metrics have been proposed to measure sensitivity, the main focus has been on visual and textual data; insufficient attention has been devoted to sensitivity metrics tailored for time series data. In this paper, we formulate several metrics, including max short-term sensitivity (MSS), max long-term sensitivity (MLS), average short-term sensitivity (ASS), and average long-term sensitivity (ALS), that target the sensitivity of XAI models with respect to generated and real time series. Our hypothesis is that close series with the same labels should yield similar explanations. We evaluate three XAI models, LIME, integrated gradients (IG), and SmoothGrad (SG), on CN-Waterfall, a deep convolutional network that is a highly accurate time series classifier in affect computing. Our experiments cover data-, metric-, and XAI hyperparameter-related settings on the WESAD and MAHNOB-HCI datasets. The results reveal that (i) IG and LIME provide a lower sensitivity scale than SG across all metrics and settings, potentially due to the lower scale of the importance scores generated by IG and LIME, (ii) the XAI models show higher sensitivities for smaller data windows, (iii) the sensitivities of the XAI models fluctuate when the network parameters and data properties change, and (iv) the XAI models provide unstable sensitivities under different hyperparameter settings.
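For intuition, the sketch below illustrates the general recipe behind such window-based sensitivity scores: explain an original series and a close, same-label neighbour, then compare the two attribution vectors over short and long sliding windows, taking the maximum or the average of the window-wise distances. This is a minimal illustration only; the L2 distance, the window lengths, the random stand-in attributions, and the function name window_sensitivities are assumptions made here for the sketch and do not reproduce the paper's exact definitions of MSS, MLS, ASS, and ALS.

import numpy as np

def window_sensitivities(attr_a, attr_b, window):
    # L2 distance between two attribution vectors over every sliding window.
    # attr_a / attr_b: 1-D attributions for two "close" time series with the
    # same label (in the paper these would come from LIME, IG, or SG applied
    # to CN-Waterfall; here they are random stand-ins).
    assert attr_a.shape == attr_b.shape
    return np.array([
        np.linalg.norm(attr_a[s:s + window] - attr_b[s:s + window])
        for s in range(len(attr_a) - window + 1)
    ])

rng = np.random.default_rng(0)
attr_orig = rng.normal(size=200)                          # explanation of series x
attr_pert = attr_orig + rng.normal(scale=0.05, size=200)  # explanation of a close neighbour of x

short = window_sensitivities(attr_orig, attr_pert, window=10)  # short-term windows
long_ = window_sensitivities(attr_orig, attr_pert, window=50)  # long-term windows

print("MSS-style score:", short.max())   # max short-term sensitivity
print("ASS-style score:", short.mean())  # average short-term sensitivity
print("MLS-style score:", long_.max())   # max long-term sensitivity
print("ALS-style score:", long_.mean())  # average long-term sensitivity

Under the paper's hypothesis, a faithful explainer should keep these scores small for close, same-label series. Note that raw scores also inherit the scale of the attributions themselves, which the abstract cites as a possible reason why IG and LIME score lower than SG.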

Place, publisher, year, edition, pages
IEEE, 2022. Vol. 10, p. 23995-24009
Keywords [en]
Explainable AI, Metrics, Time series data, Deep convolutional neural network
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-192976
DOI: 10.1109/ACCESS.2022.3155115
ISI: 000766548000001
Scopus ID: 2-s2.0-85125751693
OAI: oai:DiVA.org:umu-192976
DiVA, id: diva2:1642777
Available from: 2022-03-07. Created: 2022-03-07. Last updated: 2024-04-29. Bibliographically approved.
In thesis
1. Affect detection with explainable AI for time series
2024 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title [sv]
Detektering av känslomässiga reaktioner med förklarande AI för tidsserier
Abstract [en]

The exponential growth in the use of machine learning (ML) models has facilitated the development of decision-making systems, particularly for tasks such as affect detection. Affect-aware systems, which discern human affective states by blending emotion theories with practical engineering, have garnered significant attention. Implementing such systems entails various approaches. We focus on leveraging ML techniques to elucidate relations between affective states and multiple physiological indicators, including sensory time series data. However, a notable technical challenge arises from the expensive design of existing models, which is particularly problematic in knowledge-constrained environments. This thesis endeavors to address this challenge by proposing a meticulously crafted end-to-end deep learning model, drawing inspiration from the principles of decision explainability in affect detectors.

Explainable artificial intelligence (XAI) seeks to demystify the decision-making process of ML models, mitigating the "black box" effect stemming from their complex, non-linear structures. Enhanced transparency fosters trust among end users and mitigates the risks associated with biased outcomes. Despite rapid advancements in XAI, particularly in vision tasks, the methods employed are not readily applicable to time series data, especially in affect detection tasks. This thesis therefore aims to pioneer the fusion of XAI techniques with affect detection, with a dual objective: first, to render the decisions of affect detectors transparent through the introduction of a valid explainable model; and second, to assess the state of explainability for affect detection time series data by presenting a range of objective metrics.

In summary, this research carries significant practical implications that benefit society at large. The proposed affect detector can serve not only as a benchmark in the field but also as a prior for related tasks such as depression detection. Our work further facilitates full integration of the detector into real-world settings when coupled with the accompanying explainability tools, which can be used in any decision-making domain where ML techniques are applied to time series data. The findings of this research also raise scholars' awareness of the importance of carefully designing transparent systems.

Place, publisher, year, edition, pages
Umeå University, 2024. p. 57
Series
UMINF, ISSN 0348-0542 ; 24.06
Keywords
Explainable AI, Affect Detection, Time Series, Deep Convolutional Neural Network, Machine Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-223857 (URN)
978-91-8070-407-6 (ISBN)
978-91-8070-408-3 (ISBN)
Public defence
2024-05-24, Hörsal MIT.A.121, 13:15 (English)
Available from: 2024-05-03. Created: 2024-04-29. Last updated: 2024-04-30. Bibliographically approved.

Open Access in DiVA

fulltext (4564 kB), 1526 downloads
File information
File name: FULLTEXT02.pdf
File size: 4564 kB
Checksum (SHA-512):
bfea63d9f291b071965c52e78065712fd27f89b000a244a0e370d2ee694429465b6df6c56985c491784d637fed66a4871492360b11689e9193af587660d2d1e6
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Fouladgar, Nazanin; Främling, Kary

Total: 1535 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are no longer available.

Total: 949 hits