Affect detection with explainable AI for time series
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9009-0999
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title: Detektering av känslomässiga reaktioner med förklarande AI för tidsserier (Swedish)
Abstract [en]

The exponential growth in the use of machine learning (ML) models has facilitated the development of decision-making systems, particularly for tasks such as affect detection. Affect-aware systems, which discern human affective states by blending emotion theories with practical engineering, have garnered significant attention. Implementing such systems entails various approaches. We focus on leveraging ML techniques to elucidate relations between affective states and multiple physiological indicators, including sensory time-series data. However, a notable technical challenge arises from the expensive design of existing models, which is particularly problematic in knowledge-constrained environments. This thesis addresses this challenge by proposing a meticulously crafted end-to-end deep learning model, drawing inspiration from the principles of decision explainability in affect detectors.

Explainable artificial intelligence (XAI) seeks to demystify the decision-making process of ML models, mitigating the "black box" effect stemming from their complex, non-linear structures. Enhanced transparency fosters trust among end-users and mitigates the risks associated with biased outcomes. Despite rapid advancements in XAI, particularly in vision tasks, the methods employed are not readily applicable to time-series data, especially in affect detection tasks. This thesis thus aims to pioneer the fusion of XAI techniques with affect detection, with a dual objective: first, to render the decisions of affect detectors transparent through the introduction of a valid explainable model; and second, to assess the state of explainability for time-series data in affect detection by presenting a range of objective metrics.

In summary, this research carries significant practical implications, benefiting society at large. The proposed affect detector can not only serve as a benchmark in the field but also as a prior for related tasks such as depression detection. Our work further facilitates the full integration of the detector into real-world settings when coupled with the accompanying explainability tools. These tools can be utilized in any decision-making domain where ML techniques are applied to time series data. The findings of this research also raise awareness among scholars about the careful design of transparent systems.

Place, publisher, year, edition, pages
Umeå University, 2024, p. 57
Series
UMINF, ISSN 0348-0542 ; 24.06
Keywords [en]
Explainable AI, Affect Detection, Time Series, Deep Convolutional Neural Network, Machine Learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-223857
ISBN: 978-91-8070-407-6 (print)
ISBN: 978-91-8070-408-3 (electronic)
OAI: oai:DiVA.org:umu-223857
DiVA, id: diva2:1854993
Public defence
2024-05-24, Hörsal MIT.A.121, 13:15 (English)
Available from: 2024-05-03 Created: 2024-04-29 Last updated: 2024-04-30 Bibliographically approved
List of papers
1. Exploring Contextual Importance and Utility in Explaining Affect Detection
2021 (English) In: AIxIA 2020 – Advances in Artificial Intelligence: XIXth International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020, Revised Selected Papers, Springer, 2021, p. 3-18. Conference paper, Published paper (Refereed)
Abstract [en]

With the ubiquitous usage of machine learning models and their inherent black-box nature, the necessity of explaining the decisions made by these models has become crucial. Although outcome explanation has recently been taken into account as a solution to the transparency issue in many areas, affect computing is one of the domains with the least dedicated effort on the practice of explainable AI, particularly across different machine learning models. The aim of this work is to evaluate the outcome explanations of two black-box models, namely a neural network (NN) and linear discriminant analysis (LDA), to understand individuals' affective states measured by wearable sensors. Emphasizing context-aware decision explanations of these models, the two concepts of Contextual Importance (CI) and Contextual Utility (CU) are employed as a model-agnostic outcome explanation approach. We conduct our experiments on two multimodal affect computing datasets, namely WESAD and MAHNOB-HCI. The results of applying a neural-based model to the first dataset reveal that the electrodermal activity, respiration, and accelerometer sensors contribute significantly to the detection of the "meditation" state for a particular participant. However, the respiration sensor does not intervene in the LDA decision for the same state. For the second dataset and the neural network model, the importance and utility of the electrocardiogram and respiration sensors are shown as the dominant features in the detection of an individual's "surprised" state, while the LDA model does not rely on the respiration sensor to detect this mental state.
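To make the CI/CU computation concrete, the sketch below estimates both quantities for a single feature of a single prediction: the feature is swept across its admissible range while everything else is held fixed, and the induced minimum and maximum of the predicted class probability yield CI (how much the output can move) and CU (where the current prediction sits within that span). It assumes a scikit-learn-style classifier exposing predict_proba and a known feature range; these are illustrative assumptions, not the exact setup of the paper.

```python
import numpy as np

def contextual_importance_utility(model, x, feature, value_range, n_samples=100):
    """Model-agnostic CI/CU for one feature of a single input (sketch).

    Perturbs `feature` over `value_range`, observes the predicted class
    probability, and normalises. Assumes `model` has a scikit-learn-style
    `predict_proba`; this is an illustrative assumption.
    """
    x = np.asarray(x, dtype=float)
    base_probs = model.predict_proba(x[None, :])[0]
    target = int(np.argmax(base_probs))            # explain the predicted class

    # Sweep the chosen feature across its admissible range, all else fixed.
    lo, hi = value_range
    sweep = np.tile(x, (n_samples, 1))
    sweep[:, feature] = np.linspace(lo, hi, n_samples)
    probs = model.predict_proba(sweep)[:, target]

    cmin, cmax = probs.min(), probs.max()
    ci = (cmax - cmin) / 1.0                       # output range is [0, 1] for probabilities
    cu = (base_probs[target] - cmin) / (cmax - cmin + 1e-12)
    return ci, cu
```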

Place, publisher, year, edition, pages
Springer, 2021
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12414
Keywords
Affect detection, Black-Box decision, Contextual importance and utility, Explainable AI
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-183556 (URN)
10.1007/978-3-030-77091-4_1 (DOI)
000886994000001 ()
2-s2.0-85111382491 (Scopus ID)
978-3-030-77090-7 (ISBN)
978-3-030-77091-4 (ISBN)
Conference
AIxIA 2020, 19th International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020
Note

Also part of the Lecture Notes in Artificial Intelligence book sub series (LNAI, volume 12414)

Available from: 2021-05-25 Created: 2021-05-25 Last updated: 2024-04-29 Bibliographically approved
2. CN-waterfall: a deep convolutional neural network for multimodal physiological affect detection
2022 (English) In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 34, no. 3, p. 2157-2176. Article in journal (Refereed) Published
Abstract [en]

Affective computing solutions, in the literature, mainly rely on machine learning methods designed to accurately detect human affective states. Nevertheless, many of the proposed methods are based on handcrafted features, requiring sufficient expert knowledge in the realm of signal processing. With the advent of deep learning methods, attention has turned toward reduced feature engineering and more end-to-end machine learning. However, most of the proposed models rely on late fusion in a multimodal context. Meanwhile, addressing interrelations between modalities for intermediate-level data representation has been largely neglected. In this paper, we propose a novel deep convolutional neural network, called CN-Waterfall, consisting of two modules: Base and General. While the Base module focuses on the low-level representation of data from each single modality, the General module provides further information, indicating relations between modalities in the intermediate- and high-level data representations. The latter module has been designed based on theoretically grounded concepts in the Explainable AI (XAI) domain, consisting of four different fusions. These fusions are mainly tailored to correlation- and non-correlation-based modalities. To validate our model, we conduct an exhaustive experiment on WESAD and MAHNOB-HCI, two publicly and academically available datasets in the context of multimodal affective computing. We demonstrate that our proposed model significantly improves the performance of physiological-based multimodal affect detection.
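As a rough illustration of this two-module design, the PyTorch sketch below gives each sensor modality its own convolutional Base module and fuses the resulting intermediate representations in a General module. The fusion here is plain concatenation for brevity; the published CN-Waterfall defines four distinct fusions tailored to correlation- and non-correlation-based modalities, so treat this as a simplified reading rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class BaseModule(nn.Module):
    """Low-level representation of one sensor modality (illustrative)."""
    def __init__(self, in_channels, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # one feature vector per modality
        )

    def forward(self, x):                  # x: (batch, channels, time)
        return self.net(x).squeeze(-1)     # (batch, hidden)

class WaterfallSketch(nn.Module):
    """Per-modality Base modules followed by a General fusion module.

    Concatenation stands in for the four correlation-aware fusions of the
    actual CN-Waterfall; this is a minimal sketch, not the published model.
    """
    def __init__(self, modality_channels, n_classes, hidden=32):
        super().__init__()
        self.bases = nn.ModuleList(
            [BaseModule(c, hidden) for c in modality_channels])
        self.general = nn.Sequential(
            nn.Linear(hidden * len(modality_channels), 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, modalities):         # list of (batch, channels, time)
        feats = [base(m) for base, m in zip(self.bases, modalities)]
        return self.general(torch.cat(feats, dim=1))

# Hypothetical usage: EDA, respiration, and a 3-axis accelerometer.
# model = WaterfallSketch(modality_channels=[1, 1, 3], n_classes=4)
```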

Place, publisher, year, edition, pages
Springer, 2022
Keywords
Multimodal affect detection, Deep convolutional neural network, Physiological-based sensors, Data fusion
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-187972 (URN)
10.1007/s00521-021-06516-3 (DOI)
000698886400003 ()
2-s2.0-85115620535 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2021-09-28 Created: 2021-09-28 Last updated: 2024-04-29 Bibliographically approved
3. Metrics and Evaluations of Time Series Explanations: An Application in Affect Computing
2022 (English) In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 23995-24009. Article in journal (Refereed) Published
Abstract [en]

Explainable artificial intelligence (XAI) has shed light on a wide range of applications by clarifying why neural models make specific decisions. However, it remains challenging to measure how sensitive XAI solutions are in their explanations of neural models. Although different evaluation metrics have been proposed to measure sensitivity, the main focus has been on visual and textual data; insufficient attention has been devoted to sensitivity metrics tailored for time series data. In this paper, we formulate several metrics, including max short-term sensitivity (MSS), max long-term sensitivity (MLS), average short-term sensitivity (ASS), and average long-term sensitivity (ALS), that target the sensitivity of XAI models with respect to generated and real time series. Our hypothesis is that for close series with the same labels, we obtain similar explanations. We evaluate three XAI models, LIME, integrated gradients (IG), and SmoothGrad (SG), on CN-Waterfall, a deep convolutional network that is a highly accurate time series classifier in affect computing. Our experiments rely on data-, metric-, and XAI-hyperparameter-related settings on the WESAD and MAHNOB-HCI datasets. The results reveal that (i) IG and LIME provide a lower sensitivity scale than SG in all metrics and settings, potentially due to the lower scale of importance scores generated by IG and LIME, (ii) the XAI models show higher sensitivities for a smaller window of data, (iii) the sensitivities of the XAI models fluctuate when the network parameters and data properties change, and (iv) the XAI models provide unstable sensitivities under different settings of hyperparameters.
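The sketch below conveys the general shape of such sensitivity metrics under the paper's hypothesis that close series with the same label should receive similar explanations: long-term scores compare attributions over the whole series, short-term scores compare them within sliding windows, and each is summarized by its maximum or its average. The exact MSS/MLS/ASS/ALS definitions are given in the paper; the explainer interface and window handling here are illustrative assumptions, not the published formulation.

```python
import numpy as np

def sensitivity_metrics(explain, x, x_prime, window=16):
    """Illustrative short-/long-term sensitivity of a time-series explainer.

    `explain` maps a series of shape (time, features) to an attribution map
    of the same shape; `x` and `x_prime` are two close series with the same
    label. Assumes window <= len(x). Not the paper's exact formulation.
    """
    diff = np.abs(explain(x) - explain(x_prime))   # attribution change

    # Long-term: attribution change over the full series.
    mls = float(diff.max())                        # max long-term sensitivity
    als = float(diff.mean())                       # average long-term sensitivity

    # Short-term: attribution change within sliding windows.
    T = diff.shape[0]
    win_scores = [diff[t:t + window].mean() for t in range(T - window + 1)]
    mss = float(np.max(win_scores))                # max short-term sensitivity
    ass = float(np.mean(win_scores))               # average short-term sensitivity
    return {"MSS": mss, "MLS": mls, "ASS": ass, "ALS": als}
```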

Place, publisher, year, edition, pages
IEEE, 2022
Keywords
Explainable AI, Metrics, Time series data, Deep convolutional neural network
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-192976 (URN)
10.1109/ACCESS.2022.3155115 (DOI)
000766548000001 ()
2-s2.0-85125751693 (Scopus ID)
Available from: 2022-03-07 Created: 2022-03-07 Last updated: 2024-04-29 Bibliographically approved
4. SSET: swapping–sliding explanation for timeseries classifiers in affect detection
(English) Manuscript (preprint) (Other academic)
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-223702 (URN)
Available from: 2024-04-24 Created: 2024-04-24 Last updated: 2024-04-29

Open Access in DiVA

spikblad (350 kB), 73 downloads
File information
File name: SPIKBLAD01.pdf
File size: 350 kB
Checksum (SHA-512): a41ce34389a2d8fddedad6da17b16884174d57bb9d1791e4f9340526f9b8f6dcdac7a141b4c1e4a51b1ac9f156a6bfd28e8ca6da1bf501fa261fb7909e8b674c
Type: spikblad
Mimetype: application/pdf

fulltext (5888 kB), 386 downloads
File information
File name: FULLTEXT04.pdf
File size: 5888 kB
Checksum (SHA-512): 13851d9315b82fd07bad5d805723de1074e162435bcd96024f5ea3aaf565b04d0971557afa027034cccf8ce40f0cc0a7e70d73a8900227568ff6f05d5d15af4e
Type: fulltext
Mimetype: application/pdf

Authority records

Fouladgar, Nazanin

Total: 388 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 1210 hits