Explainable AI has recently paved the way to justify decisions made by black-box models in various areas. However, mature work on explainability in the field of affect detection remains limited. In this work, we evaluate a black-box outcome explanation for understanding humans’ affective states. We employ the two concepts of Contextual Importance (CI) and Contextual Utility (CU), emphasizing context-aware explanation of the decisions of a non-linear model, namely a neural network. The neural model is designed to detect individual mental states measured by wearable sensors in order to monitor the human user’s well-being. We conduct our experiments and outcome explanation on WESAD and MAHNOB-HCI, two multimodal affective computing datasets. The results reveal that, for a specific participant, the electrodermal activity, respiration, and accelerometer signals in the first experiment, and the electrocardiogram and respiration signals in the second, contribute significantly to the classification of mental states. To the best of our knowledge, this is the first study leveraging the CI and CU concepts to explain the outcomes of an affect detection model.