Publications (10 of 83)
Calvaresi, D., Najjar, A., Omicini, A. & Främling, K. (2026). Preface. In: Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Simona Tiribelli; Kary Främling (Eds.), Explainable, trustworthy, and responsible AI and multi-agent systems: 7th International Workshop, EXTRAAMAS 2025. Springer Science+Business Media B.V.
Preface
2026 (English). In: Explainable, trustworthy, and responsible AI and multi-agent systems: 7th International Workshop, EXTRAAMAS 2025 / [ed] Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Simona Tiribelli; Kary Främling, Springer Science+Business Media B.V., 2026. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2026
Series
Lecture Notes in Artificial Intelligence, ISSN 0302-9743, E-ISSN 1611-3349 ; 15936
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-246074 (URN)
2-s2.0-105020094471 (Scopus ID)
978-3-032-01398-9 (ISBN)
978-3-032-01399-6 (ISBN)
Available from: 2025-11-24. Created: 2025-11-24. Last updated: 2025-11-24. Bibliographically approved.
Främling, K. (2026). Social explainable AI: what is it and how to make it happen with CIU? In: Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Simona Tiribelli; Kary Främling (Eds.), Explainable, trustworthy, and responsible AI and multi-agent systems: 7th international workshop, EXTRAAMAS 2025, revised selected papers. Paper presented at 7th International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2025, Detroit, MI, USA, May 19–20, 2025 (pp. 58-63). Springer Science+Business Media B.V.
Social explainable AI: what is it and how to make it happen with CIU?
2026 (English). In: Explainable, trustworthy, and responsible AI and multi-agent systems: 7th international workshop, EXTRAAMAS 2025, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Simona Tiribelli; Kary Främling, Springer Science+Business Media B.V., 2026, p. 58-63. Conference paper, Published paper (Refereed)
Abstract [en]

Current eXplainable AI (XAI) methods tend to provide only one-way explanations, limiting user interaction and contextual adaptation. Social Explainable AI (sXAI) addresses this by enabling interactive, co-constructed explanations. This paper demonstrates how Contextual Importance and Utility (CIU) and Knowledge Graphs (KGs) can generate structured and context-aware explanations. We present a proof-of-concept implementation that shows how KGs facilitate dynamic dialogues, making sXAI practical. Our findings highlight the potential of CIU and KGs in creating more user-centered, interactive explainability frameworks.

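As a hedged illustration of the idea, not the paper's actual proof-of-concept, the sketch below shows how a small knowledge graph could turn one-shot CIU attributions into dialogue moves. The feature names, graph content, and the follow_up() helper are all hypothetical.

```python
# Hypothetical sketch: a toy knowledge graph routes follow-up questions
# about CIU attributions. Not the paper's implementation.
CI_CU = {"age": (0.8, 0.2), "income": (0.3, 0.9)}   # feature -> (CI, CU), assumed values

# Toy "knowledge graph": feature -> related concepts a dialogue could expand on.
KG = {
    "age": ["risk group", "policy threshold"],
    "income": ["household budget", "loan ceiling"],
}

def follow_up(feature: str) -> str:
    """Turn a one-shot CIU attribution into an interactive dialogue move."""
    ci, cu = CI_CU[feature]
    leaning = "supports" if cu >= 0.5 else "speaks against"
    related = ", ".join(KG.get(feature, []))
    return (f"'{feature}' is {'highly' if ci > 0.5 else 'mildly'} important "
            f"(CI={ci:.1f}) and {leaning} the outcome (CU={cu:.1f}). "
            f"Ask me about: {related}.")

print(follow_up("age"))
```
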
Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2026
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15936
Keywords
Contextual Importance and Utility, Knowledge Graph, Partner Model, Social Explainable AI
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-246079 (URN)
10.1007/978-3-032-01399-6_4 (DOI)
2-s2.0-105020008823 (Scopus ID)
978-3-032-01398-9 (ISBN)
978-3-032-01399-6 (ISBN)
Conference
7th International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2025, Detroit, MI, USA, May 19–20, 2025.
Available from: 2025-11-24. Created: 2025-11-24. Last updated: 2025-11-24. Bibliographically approved.
Canha, D., Kubler, S., Främling, K. & Fagherazzi, G. (2025). A functionally-grounded benchmark framework for XAI methods: insights and foundations from a systematic literature review. ACM Computing Surveys, 57(12), Article ID 320.
A functionally-grounded benchmark framework for XAI methods: insights and foundations from a systematic literature review
2025 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 57, no 12, article id 320. Article in journal (Refereed). Published
Abstract [en]

Artificial Intelligence (AI) is transforming industries, offering new opportunities to manage and enhance innovation. However, these advancements bring significant challenges for scientists and businesses, one of the most critical being the 'trustworthiness' of AI systems. A key requirement of trustworthiness is transparency, closely linked to explicability. Consequently, the exponential growth of eXplainable AI (XAI) has led to the development of numerous methods and metrics for explainability. Nevertheless, this has resulted in a lack of standardized and formal definitions for fundamental XAI properties (e.g., what do soundness, completeness, and faithfulness of an explanation entail? How is the stability of an XAI method defined?). This lack of consensus makes it difficult for XAI practitioners to establish a shared foundation, thereby impeding the effective benchmarking of XAI methods. This survey article addresses these challenges with two primary objectives. First, it systematically reviews and categorizes XAI properties, distinguishing between human-centered properties (relying on empirical studies involving explainees) and functionally-grounded ones (quantitative metrics independent of explainees). Second, it expands this analysis by introducing a hierarchically structured, functionally-grounded benchmark framework for XAI methods, providing formal definitions of XAI properties. The framework's practicality is demonstrated by applying it to two widely used methods: LIME and SHAP.

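To make the notion of a functionally-grounded property concrete, here is a minimal sketch of one commonly studied example: stability of an attribution method under small input perturbations. The metric definition used here (mean Spearman rank correlation) is an illustrative assumption, not the paper's formal definition; any attribution method (LIME, SHAP, CIU, ...) can be plugged in as `attribute`.

```python
# Illustrative functionally-grounded metric: attribution stability.
import numpy as np
from scipy.stats import spearmanr

def stability(attribute, x, sigma=0.01, trials=20, seed=0):
    """Mean Spearman rank correlation between the attributions of x and of
    slightly perturbed copies of x; 1.0 = perfectly stable feature ranking."""
    rng = np.random.default_rng(seed)
    base = attribute(x)                      # attribution vector for the instance
    rhos = []
    for _ in range(trials):
        x_pert = x + rng.normal(0.0, sigma, x.shape)   # small Gaussian noise
        rho, _ = spearmanr(base, attribute(x_pert))
        rhos.append(rho)
    return float(np.mean(rhos))
```
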
Place, publisher, year, edition, pages
ACM Digital Library, 2025
Keywords
Artificial intelligence, eXplainable AI (XAI), interpretability, machine learning, responsible AI, transparency, trustworthiness
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-242441 (URN)
10.1145/3737445 (DOI)
2-s2.0-105011390896 (Scopus ID)
Available from: 2025-07-31. Created: 2025-07-31. Last updated: 2025-07-31. Bibliographically approved.
Malhi, A., Matekenya, J. P. & Främling, K. (2025). Generating explanations for molecular property predictions in graph neural networks. In: Yazan Mualla; Liuwen Yu; Davide Liga; Igor Tchappi; Réka Markovich (Ed.), Advances in explainability, agents, and large language models: first international workshop on causality, agents and large models, CALM 2024, Kyoto, Japan, November 18–19, 2024, proceedings. Paper presented at 1st International Workshop on Causality, Agents and Large Models, CALM 2024, Kyoto, Japan, November 18-19, 2024 (pp. 20-32). Cham: Springer
Generating explanations for molecular property predictions in graph neural networks
2025 (English). In: Advances in explainability, agents, and large language models: first international workshop on causality, agents and large models, CALM 2024, Kyoto, Japan, November 18–19, 2024, proceedings / [ed] Yazan Mualla; Liuwen Yu; Davide Liga; Igor Tchappi; Réka Markovich, Cham: Springer, 2025, p. 20-32. Conference paper, Published paper (Refereed)
Abstract [en]

Graph neural networks have helped researchers overcome the challenges of deep learning on graphs in non-Euclidean space. As with most deep learning algorithms, although the models produce good predictions, explaining those predictions is often challenging. This paper focuses on applying graph neural networks to predict the properties of molecules in molecular datasets, with the aim of exploring the generation of explanations for molecule property predictions. Four graph neural networks and seven explainers are chosen to generate and compare the quality of the explanations given by the explainers for each model's predictions. The quality of an explanation is measured by sparsity, fidelity, and fidelity inverse. It is observed that the models find it difficult to learn node embeddings under class imbalance, even though they achieved 75% accuracy and an F1 score of 66%. It is also observed that, for all datasets, sparsity had a statistically significant effect on fidelity: as more important features are masked, explanation quality decreases. The effect of sparsity on fidelity inverse varied from dataset to dataset; as more unimportant features were masked, explanation quality improved in some datasets, while the change was not significant in others. Finally, it was observed that explanation quality differs across models. However, larger neural networks produced better predictions in our experiments, and the quality of the explanations of those predictions was not lower than that of smaller neural networks.

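For orientation, the sketch below computes the two evaluation quantities named in the abstract under their commonly used definitions: fidelity as the prediction drop when explainer-selected features are occluded, and sparsity as the fraction of features left unmasked. The `model` callable, the feature indexing, and the occlusion baseline are placeholder assumptions, not the paper's exact setup.

```python
# Common definitions of the paper's evaluation metrics, sketched for vectors.
import numpy as np

def fidelity_plus(model, x, important, baseline=0.0):
    """Drop in the predicted score when the explainer-selected (important)
    features are occluded; larger drop = more faithful explanation."""
    x_masked = x.copy()
    x_masked[important] = baseline          # occlude the important features
    return model(x) - model(x_masked)

def sparsity(important, n_features):
    """1.0 = very sparse explanation, 0.0 = everything marked important."""
    return 1.0 - len(important) / n_features
```
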
Place, publisher, year, edition, pages
Cham: Springer, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2471
Keywords
Explainability, Graph Neural Networks, Molecular properties
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-238729 (URN)
10.1007/978-3-031-89103-8_2 (DOI)
2-s2.0-105004255003 (Scopus ID)
978-3-031-89102-1 (ISBN)
978-3-031-89103-8 (ISBN)
Conference
1st International Workshop on Causality, Agents and Large Models, CALM 2024, Kyoto, Japan, November 18-19, 2024
Available from: 2025-05-13. Created: 2025-05-13. Last updated: 2025-05-13. Bibliographically approved.
Främling, K. (2025). R implementation of contextual importance and utility for explainable AI.
R implementation of contextual importance and utility for explainable AI
2025 (English). Other (Other (popular science, discussion, etc.))
Keywords
Explainable Artificial Intelligence, XAI, Contextual Importance, Contextual Utility
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-240461 (URN)
Available from: 2025-06-17. Created: 2025-06-17. Last updated: 2025-06-17. Bibliographically approved.
Grund Pihlgren, G. & Främling, K. (2025). Segmentation and smoothing affect explanation quality more than the choice of perturbation-based XAI method for image explanations. In: 2025 International Joint Conference on Neural Networks (IJCNN). Paper presented at 2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy, June 30 - July 5, 2025 (pp. 1-8). IEEE
Segmentation and smoothing affect explanation quality more than the choice of perturbation-based XAI method for image explanations
2025 (English). In: 2025 International Joint Conference on Neural Networks (IJCNN), IEEE, 2025, p. 1-8. Conference paper, Published paper (Refereed)
Abstract [en]

Perturbation-based post-hoc image explanation methods are commonly used to explain image prediction models. These methods perturb parts of the input to measure how those parts affect the output. Since the methods only require the input and output, they can be applied to any model, making them a popular choice to explain black-box models. While many different methods exist and have been compared with one another, it remains poorly understood which parameters of the different methods are responsible for their varying performance.

This work uses the Randomized Input Sampling for Explanations (RISE) method as a baseline to evaluate many combinations of mask sampling, segmentation techniques, smoothing, attribution calculation, and per-segment or per-pixel attribution, using a proxy metric. The results show that attribution calculation, which is frequently the focus of other works, has little impact on the results. Conversely, segmentation and per-pixel attribution, rarely examined parameters, have a significant impact.

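A minimal NumPy sketch of the RISE baseline may help: the attribution is the prediction-weighted average of random binary occlusion masks. The mask count, grid size, and the nearest-neighbour upsampling used here are simplifying assumptions (the original RISE uses bilinearly upsampled, randomly shifted masks), which is exactly the kind of parameter choice the paper studies.

```python
# Simplified RISE: saliency = prediction-weighted average of random masks.
import numpy as np

def rise_saliency(model, image, n_masks=500, grid=8, p_keep=0.5, seed=0):
    """model(image) -> scalar score for the class being explained;
    image is an (H, W, C) array in [0, 1]."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    cell_h, cell_w = h // grid + 1, w // grid + 1
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        # Coarse random binary mask, upsampled (nearest-neighbour) to image size.
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:h, :w]
        saliency += model(image * mask[..., None]) * mask
    return saliency / (n_masks * p_keep)   # normalise by expected mask value
```
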
Place, publisher, year, edition, pages
IEEE, 2025
Series
Proceedings of International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407
Keywords
explainable AI, image explanations, post-hoc explanations, image segmentation, saliency maps, perturbation-based XAI
National Category
Computer graphics and computer vision; Artificial Intelligence
Identifiers
urn:nbn:se:umu:diva-246444 (URN)
10.1109/IJCNN64981.2025.11228842 (DOI)
2-s2.0-105023967488 (Scopus ID)
979-8-3315-1042-8 (ISBN)
979-8-3315-1043-5 (ISBN)
Conference
2025 International Joint Conference on Neural Networks (IJCNN), Rome, Italy, June 30 - July 5, 2025
Funder
Swedish Research Council; Knut and Alice Wallenberg Foundation
Available from: 2025-11-17. Created: 2025-11-17. Last updated: 2025-12-15. Bibliographically approved.
Främling, K. (2024). Contextual importance and utility in Python: new functionality and insights with the py-ciu package. In: XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju, South Korea. Paper presented at the XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), 15 August 2024.
Contextual importance and utility in Python: new functionality and insights with the py-ciu package
2024 (English). In: XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju, South Korea, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

The availability of easy-to-use and reliable software implementations is important for allowing researchers in academia and industry to test, assess, and adopt eXplainable AI (XAI) methods. This paper describes the py-ciu Python implementation of the Contextual Importance and Utility (CIU) model-agnostic, post-hoc explanation method and illustrates capabilities of CIU that go beyond the current state of the art and could be useful for XAI practitioners in general.

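Rather than guessing the py-ciu API, the sketch below computes CIU directly from its published definitions: sweep one feature over its value range while holding the others at the instance's values, then normalise the observed output swing (Contextual Importance) and locate the actual output within that swing (Contextual Utility). The function name and the assumed [out_min, out_max] output range are illustrative.

```python
# CIU from first principles, following the published definitions.
import numpy as np

def ciu(model, x, j, lo, hi, out_min=0.0, out_max=1.0, n=50):
    """Contextual Importance (CI) and Contextual Utility (CU) of feature j
    at instance x, where model(x) returns a scalar in [out_min, out_max]."""
    ys = []
    for v in np.linspace(lo, hi, n):        # sweep feature j, others fixed
        x_v = x.copy()
        x_v[j] = v
        ys.append(model(x_v))
    y, y_min, y_max = model(x), min(ys), max(ys)
    ci = (y_max - y_min) / (out_max - out_min)             # importance in context
    cu = (y - y_min) / (y_max - y_min) if y_max > y_min else 0.5
    return ci, cu
```
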
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-228992 (URN)
Conference
XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), 15 August 2024.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Available from: 2024-08-30. Created: 2024-08-30. Last updated: 2024-09-04. Bibliographically approved.
Patil, M. & Främling, K. (2024). Enhancing vulnerable class robustness in adversarial machine learning. In: Sansanee Auephanwiriyakul; Yi Mei; Toshihisa Tanaka (Eds.), IJCNN 2024: conference proceedings. Paper presented at the International Joint Conference on Neural Networks (IJCNN), June 30 – July 5, 2024, Yokohama, Japan. Institute of Electrical and Electronics Engineers (IEEE)
Enhancing vulnerable class robustness in adversarial machine learning
2024 (English). In: IJCNN 2024: conference proceedings / [ed] Sansanee Auephanwiriyakul; Yi Mei; Toshihisa Tanaka, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Recent advancements in machine learning, particularly in the field of adversarial training, have underscored the necessity for models that are not only accurate but also robust against adversarial attacks. However, a critical challenge lies in the unequal vulnerability of different classes within these models, often leading to a 'weakest-link' phenomenon where certain classes are more susceptible to adversarial perturbations. This paper presents a novel approach to improving the robustness of machine learning models against adversarial attacks, focusing on classes identified as vulnerable. The core of our methodology involves generating adversarial examples specifically tailored to these vulnerable classes, a process we term class-specific perturbations. We integrate this adversarial training into the learning process, aiming to enhance the model's robustness specifically for these at-risk classes. Motivated by this finding, we propose Balanced Adversarial Training (BAT) to facilitate adversarial training for the vulnerable class. Experimental evaluations across different datasets show that our proposed framework not only improves the robustness of the vulnerable class against adversarial attacks but also maintains overall model performance. Our work highlights the importance of moving beyond average accuracy, which is particularly important in safety-critical applications.

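As a hedged sketch of the idea (the paper's exact BAT procedure may differ in detail), the code below generates FGSM adversarial examples and then upweights the adversarial loss of classes identified as vulnerable via a per-class weight vector. The weighting rule and the choice of FGSM as the attack are assumptions.

```python
# Sketch: class-weighted adversarial training step (FGSM + per-class weights).
import torch
import torch.nn.functional as F

def bat_step(model, x, y, class_weights, eps=8 / 255):
    """One training step: craft adversarial examples, then weight the
    adversarial loss so vulnerable classes count more.
    class_weights: tensor of shape (n_classes,), larger = more vulnerable."""
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad = torch.autograd.grad(loss, x_req)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()  # FGSM attack
    # Upweight vulnerable classes so robustness is balanced across classes.
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    return (class_weights[y] * per_sample).mean()
```
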
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
Keywords
Adversarial Training, Adversarial Perturbation, Robust Optimization
National Category
Other Computer and Information Science; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-242761 (URN)
10.1109/IJCNN60899.2024.10650931 (DOI)
001392668201099 (ISI)
2-s2.0-85204947786 (Scopus ID)
979-8-3503-5932-9 (ISBN)
979-8-3503-5931-2 (ISBN)
Conference
International Joint Conference on Neural Networks (IJCNN), June 30 – July 5, 2024, Yokohama, Japan
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-08-08. Created: 2025-08-08. Last updated: 2025-08-08. Bibliographically approved.
Calvaresi, D., Najjar, A., Omicini, A., Aydogan, R., Carli, R., Ciatto, G., . . . Främling, K. (Eds.). (2024). Explainable and Transparent AI and Multi-Agent Systems: 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers (1 ed.). Paper presented at EXTRAAMAS 2024, 6th International Workshop, Auckland, New Zealand, May 6–10, 2024. Springer
Explainable and Transparent AI and Multi-Agent Systems: 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers
2024 (English). Conference proceedings (editor) (Refereed)
Abstract [en]

This volume constitutes the revised selected papers of the 6th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2024, held in Auckland, New Zealand, during May 6–10, 2024.

The 13 full papers presented in this book were carefully reviewed and selected from 25 submissions. The papers are organized in the following topical sections: User-centric XAI; XAI and Reinforcement Learning; Neuro-symbolic AI and Explainable Machine Learning; and XAI & Ethics.

Place, publisher, year, edition, pages
Springer, 2024, 243 pp. Edition: 1.
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14847
Keywords
Computer Science, Informatics, multi-agent systems, artificial intelligence, knowledge representation and reasoning
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-230476 (URN)
10.1007/978-3-031-70074-3 (DOI)
978-3-031-70073-6 (ISBN)
978-3-031-70074-3 (ISBN)
Conference
EXTRAAMAS 2024, 6th International Workshop, Auckland, New Zealand, May 6–10, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Note

Part of the book subseries: Lecture Notes in Artificial Intelligence (LNAI).

Included in the following conference series: EXTRAAMAS: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems.

Available from: 2024-10-02. Created: 2024-10-02. Last updated: 2024-10-03. Bibliographically approved.
Huotari, M., Malhi, A. & Främling, K. (2024). Machine learning applications for smart building energy utilization: a survey. Archives of Computational Methods in Engineering, 31(5), 2537-2556
Machine learning applications for smart building energy utilization: a survey
2024 (English). In: Archives of Computational Methods in Engineering, ISSN 1134-3060, E-ISSN 1886-1784, Vol. 31, no 5, p. 2537-2556. Article in journal (Refereed). Published
Abstract [en]

The United Nations launched the Sustainable Development Goals in 2015, which include goals for sustainable energy. Households account for 20–30% of global energy consumption in Europe, North America, and Asia, and overall global energy consumption has risen steadily in recent decades. Consequently, to meet increased energy demand and promote efficient energy consumption, there is a persistent need for applications that enhance energy utilization in buildings. However, despite the potential significance of AI in this area, few surveys have systematically categorized such applications. This paper therefore presents a systematic review of the literature and derives a novel taxonomy of applications for smart building energy utilization. The contributions of this paper are (a) a systematic review of applications and machine learning methods for smart building energy utilization, (b) a novel taxonomy for the applications, (c) a detailed analysis of the solutions and techniques used for the applications (electric grid, smart building energy management and control, maintenance and security, and personalization), and, finally, (d) a discussion of open issues and developments in the field.

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Other Computer and Information Science
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-220506 (URN)
10.1007/s11831-023-10054-7 (DOI)
001156366200002 (ISI)
2-s2.0-85184197546 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Available from: 2024-02-05. Created: 2024-02-05. Last updated: 2024-08-20. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-8078-5172