2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Detektering av känslomässiga reaktioner med förklarande AI för tidsserier
[English translation: Detection of Emotional Reactions with Explainable AI for Time Series]
Abstract [en]
The exponential growth in the use of machine learning (ML) models has facilitated the development of decision-making systems, particularly for tasks such as affect detection. Affect-aware systems, which infer human affective states by combining emotion theories with practical engineering, have attracted significant attention. Such systems can be implemented in various ways; we focus on leveraging ML techniques to elucidate relations between affective states and multiple physiological indicators, including sensory time-series data. However, a notable technical challenge arises from the expensive design of existing models, which is particularly problematic in knowledge-constrained environments. This thesis addresses this challenge by proposing a carefully crafted end-to-end deep learning model, drawing on the principles of decision explainability in affect detectors.
Explainable artificial intelligence (XAI) seeks to demystify the decision-making process of ML models, mitigating the "black box" effect stemming from their complex, non-linear structures. Enhanced transparency fosters trust among end-users and reduces the risk of biased outcomes. Despite rapid advances in XAI, particularly for computer vision tasks, existing methods are not readily applicable to time-series data, especially in affect detection tasks. This thesis therefore aims to pioneer the fusion of XAI techniques with affect detection, with a dual objective: first, to make the decisions of affect detectors transparent by introducing a valid explainable model; and second, to assess the state of explainability for affect detection time-series data by presenting a range of objective metrics.
In summary, this research carries significant practical implications that benefit society at large. The proposed affect detector can serve not only as a benchmark in the field but also as a prior for related tasks such as depression detection. Our work further facilitates full integration of the detector into real-world settings when coupled with the accompanying explainability tools. These tools can be applied in any decision-making domain where ML techniques are used on time-series data. The findings of this research also raise awareness among scholars of the importance of carefully designing transparent systems.
Place, publisher, year, edition, pages
Umeå University, 2024. p. 57
Series
UMINF, ISSN 0348-0542 ; 24.06
Keywords
Explainable AI, Affect Detection, Time Series, Deep Convolutional Neural Network, Machine Learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-223857 (URN)
978-91-8070-407-6 (ISBN)
978-91-8070-408-3 (ISBN)
Public defence
2024-05-24, Hörsal MIT.A.121, 13:15 (English)
Opponent
Supervisors
2024-05-03 · 2024-04-29 · 2024-04-30 · Bibliographically approved