Exploring Contextual Importance and Utility in Explaining Affect Detection
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9009-0999
Center for Applied Autonomous Sensor Systems, Örebro University.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-8078-5172
2021 (English) In: AIxIA 2020 – Advances in Artificial Intelligence: XIXth International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020, Revised Selected Papers, Springer, 2021, pp. 3-18. Conference paper, Published paper (Refereed)
Abstract [en]

With the ubiquitous use of machine learning models and their inherent black-box nature, explaining the decisions made by these models has become crucial. Although outcome explanation has recently been adopted as a solution to the transparency issue in many areas, affective computing is one of the domains with the least dedicated effort on the practice of explainable AI, particularly across different machine learning models. The aim of this work is to evaluate the outcome explanations of two black-box models, namely a neural network (NN) and linear discriminant analysis (LDA), in understanding individuals' affective states measured by wearable sensors. Emphasizing context-aware decision explanations of these models, we employ the two concepts of Contextual Importance (CI) and Contextual Utility (CU) as a model-agnostic outcome explanation approach. We conduct our experiments on two multimodal affective computing datasets, namely WESAD and MAHNOB-HCI. The results of applying the neural model to the first dataset reveal that the electrodermal activity, respiration, and accelerometer sensors contribute significantly to the detection of the “meditation” state for a particular participant, whereas the respiration sensor does not contribute to the LDA decision for the same state. On the second dataset, the electrocardiogram and respiration sensors emerge as the dominant features, by importance and utility, in the neural network's detection of an individual's “surprised” state, while the LDA model does not rely on the respiration sensor to detect this mental state.
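
To make the explanation method concrete: Contextual Importance measures how much of the output's range a feature can span when varied over its value range while the rest of the input (the context) is held fixed, and Contextual Utility measures how favourable the feature's current value is within that contextual range. The Python sketch below estimates both for one feature of one instance. The function names (ciu_for_feature, predict), the probability-output assumption, and the one-dimensional linspace sweep are illustrative simplifications in the spirit of Främling's CIU formulation, not the paper's implementation.

import numpy as np

def ciu_for_feature(predict, x, j, x_min, x_max, target_class, n_samples=1000):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for feature j of instance x.

    predict      -- black-box model: array (n, d) -> class probabilities (n, n_classes)
    x            -- 1-D instance of shape (d,) being explained
    j            -- index of the feature to explain
    x_min, x_max -- per-feature lower/upper bounds of the input space
    target_class -- index of the output (e.g. an affective state) to explain
    """
    # Vary feature j over its full value range while keeping the rest
    # of the context (all other feature values of x) fixed.
    samples = np.tile(x, (n_samples, 1))
    samples[:, j] = np.linspace(x_min[j], x_max[j], n_samples)
    outputs = predict(samples)[:, target_class]

    cmin, cmax = outputs.min(), outputs.max()
    y = predict(x[None, :])[0, target_class]

    # CI: share of the output's absolute range ([0, 1] for class
    # probabilities) that feature j can span in this context.
    ci = (cmax - cmin) / 1.0
    # CU: where the current value of feature j places the output within
    # that contextual range (0.5 as a neutral fallback when the model
    # is flat in feature j).
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

A high CI with a low CU would, for example, flag a sensor as important for the decision while its current reading argues against the predicted state.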

Place, publisher, year, edition, pages
Springer, 2021. pp. 3-18
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 12414
Keywords [en]
Affect detection, Black-Box decision, Contextual importance and utility, Explainable AI
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-183556
DOI: 10.1007/978-3-030-77091-4_1
ISI: 000886994000001
Scopus ID: 2-s2.0-85111382491
ISBN: 978-3-030-77090-7 (print)
ISBN: 978-3-030-77091-4 (electronic)
OAI: oai:DiVA.org:umu-183556
DiVA, id: diva2:1557418
Conference
AIxIA 2020, 19th International Conference of the Italian Association for Artificial Intelligence, Virtual Event, November 25–27, 2020
Note

Also part of the Lecture Notes in Artificial Intelligence book subseries (LNAI, volume 12414)

Available from: 2021-05-25 Created: 2021-05-25 Last updated: 2024-04-29 Bibliographically approved
Part of thesis
1. Affect detection with explainable AI for time series
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Detektering av känslomässiga reaktioner med förklarande AI för tidsserier
Abstract [en]

The exponential growth in the utilization of machine learning (ML) models has facilitated the development of decision-making systems, particularly in tasks such as affect detection. Affect-aware systems, which discern human affective states by blending emotion theories with practical engineering, have garnered significant attention. Such systems can be implemented in various ways; we focus on leveraging ML techniques to elucidate the relations between affective states and multiple physiological indicators, including sensory time series data. However, a notable technical challenge arises from the expensive design of existing models, which is particularly problematic in knowledge-constrained environments. This thesis addresses this challenge by proposing a meticulously crafted end-to-end deep learning model, drawing inspiration from the principles of decision explainability in affect detectors.

Explainable artificial intelligence (XAI) seeks to demystify the decision-making process of ML models, mitigating the "black box" effect stemming from their complex, non-linear structures. Enhanced transparency fosters trust among end-users and mitigates the risks associated with biased outcomes. Despite rapid advancements in XAI, particularly in vision tasks, the methods employed are not readily applicable to time series data, especially in affect detection tasks. This thesis therefore aims to pioneer the fusion of XAI techniques with affect detection, with a dual objective: first, to render the decisions of affect detectors transparent through the introduction of a valid explainable model; and second, to assess the state of explainability for affect detection time series data by presenting a range of objective metrics, of which a common example is sketched below.
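
The abstract does not name the objective metrics. As an illustration of the kind of metric commonly used to evaluate time series explanations, the sketch below computes a perturbation-based faithfulness score: it masks random time steps, records the drop in model confidence, and correlates that drop with the attribution mass assigned to the masked steps. All names (faithfulness_correlation, predict, attributions) and the mean-value masking strategy are assumptions for illustration, not the thesis's metric suite.

import numpy as np

def faithfulness_correlation(predict, x, attributions, target_class,
                             n_trials=50, mask_frac=0.1, seed=None):
    """Perturbation-based faithfulness: correlate the attribution mass
    of randomly masked time steps with the resulting drop in the
    model's confidence. Higher correlation suggests a more faithful
    explanation.

    predict      -- black-box model: (n, T, channels) -> class probabilities
    x            -- one multichannel time series, shape (T, channels)
    attributions -- per-time-step relevance scores, shape (T, channels)
    """
    rng = np.random.default_rng(seed)
    base = predict(x[None])[0, target_class]
    n_mask = max(1, int(mask_frac * x.shape[0]))

    attr_sums, prob_drops = [], []
    for _ in range(n_trials):
        steps = rng.choice(x.shape[0], size=n_mask, replace=False)
        x_pert = x.copy()
        x_pert[steps] = x.mean(axis=0)   # replace masked steps with channel means
        prob_drops.append(base - predict(x_pert[None])[0, target_class])
        attr_sums.append(attributions[steps].sum())

    # Pearson correlation between attribution mass and confidence drop.
    return np.corrcoef(attr_sums, prob_drops)[0, 1]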

In summary, this research carries significant practical implications, benefiting society at large. The proposed affect detector can not only serve as a benchmark in the field but also act as a starting point for related tasks such as depression detection. Our work further facilitates full integration of the detector into real-world settings when coupled with the accompanying explainability tools, which can be applied in any decision-making domain where ML techniques operate on time series data. The findings of this research also raise awareness among scholars of the importance of carefully designing transparent systems.

Place, publisher, year, edition, pages
Umeå University, 2024. p. 57
Series
UMINF, ISSN 0348-0542 ; 24.06
Keywords
Explainable AI, Affect Detection, Time Series, Deep Convolutional Neural Network, Machine Learning
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-223857 (URN)
978-91-8070-407-6 (ISBN)
978-91-8070-408-3 (ISBN)
Public defence
2024-05-24, Hörsal MIT.A.121, 13:15 (English)
Available from: 2024-05-03 Created: 2024-04-29 Last updated: 2024-04-30 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text · Scopus

Person

Fouladgar, Nazanin; Främling, Kary
