Decision Explanation: Applying Contextual Importance and Contextual Utility in Affect Detection
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9009-0999
Center for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Aalto University, School of Science and Technology, Finland. ORCID iD: 0000-0002-8078-5172
2020 (English). In: Proceedings of the Italian Workshop on Explainable Artificial Intelligence / [ed] Cataldo Musto, Daniele Magazzeni, Salvatore Ruggieri, Giovanni Semeraro, Technical University of Aachen, 2020, p. 1-13. Conference paper, Published paper (Refereed)
Abstract [en]

Explainable AI has recently paved the way to justify decisions made by black-box models in various areas. However, the body of work in the field of affect detection remains limited. In this work, we evaluate a black-box outcome explanation for understanding humans' affective states. We employ the two concepts of Contextual Importance (CI) and Contextual Utility (CU), emphasizing a context-aware decision explanation of a non-linear model, here a neural network. The neural model is designed to detect individual mental states from wearable-sensor measurements in order to monitor the user's well-being. We conduct our experiments and outcome explanation on WESAD and MAHNOB-HCI, two multimodal affective computing datasets. The results reveal that the electrodermal activity, respiration, and accelerometer signals in the first experiment, and the electrocardiogram and respiration signals in the second, contribute significantly to the classification of a specific participant's mental states. To the best of our knowledge, this is the first study leveraging the CI and CU concepts in the outcome explanation of an affect detection model.
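As a pointer for readers, the sketch below shows one common way of estimating CI and CU for a single prediction: perturb one feature over its value range while holding the rest of the instance fixed, then relate the resulting output range to the output's absolute range (CI) and locate the actual output within that range (CU). This is a minimal illustration under stated assumptions, not the authors' implementation: the function name ci_cu, the predict callable, and parameters such as feature_range and n_samples are hypothetical, and a scalar model output in [out_min, out_max] (e.g. a class probability) is assumed.

```python
import numpy as np

def ci_cu(predict, x, feature_idx, feature_range,
          out_min=0.0, out_max=1.0, n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    of one input feature for a single prediction (illustrative sketch)."""
    x = np.asarray(x, dtype=float)

    # Vary the chosen feature over its value range while keeping the
    # remaining features of the explained instance x fixed.
    values = np.linspace(feature_range[0], feature_range[1], n_samples)
    perturbed = np.tile(x, (n_samples, 1))
    perturbed[:, feature_idx] = values
    outputs = np.array([predict(row) for row in perturbed])

    cmin, cmax = outputs.min(), outputs.max()
    y = predict(x)

    # CI: share of the output's absolute range that this feature can
    # span in the current context.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: where the actual output lies within that contextual range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

In an affect-detection setting such as the one described above, predict would wrap the trained neural model's probability for the predicted mental state, and feature_idx would select one sensor-derived feature (e.g. an electrodermal activity or respiration statistic), yielding per-feature CI and CU scores for that participant's prediction.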

Place, publisher, year, edition, pages
Technical University of Aachen, 2020. p. 1-13
Series
CEUR Workshop Proceedings, ISSN 1613-0073 ; 2742
Keywords [en]
Explainable AI, Affect detection, Black-Box decision, Contextual Importance and Utility
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-176746
OAI: oai:DiVA.org:umu-176746
DiVA, id: diva2:1501335
Conference
Italian Workshop on Explainable Artificial Intelligence, XAI.it 2020, co-located with 19th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2020), Online Event, November 25-26, 2020
Available from: 2020-11-16 Created: 2020-11-16 Last updated: 2020-12-07 Bibliographically approved

Open Access in DiVA

fulltext (1071 kB)
File information
File name: FULLTEXT01.pdf
File size: 1071 kB
Checksum (SHA-512): 1eddebe44ad1c747ba93d1f1e40498fd5d909572ce2f1ea216a941485ee7e4c17e1314a35538821894c71410fdeda557662570a00993cf662d1fcf55e7edfc06
Type: fulltext
Mimetype: application/pdf

Authority records
Fouladgar, Nazanin; Främling, Kary
