Publications (10 of 77)
Malhi, A., Matekenya, J. P. & Främling, K. (2025). Generating explanations for molecular property predictions in graph neural networks. In: Yazan Mualla; Liuwen Yu; Davide Liga; Igor Tchappi; Réka Markovich (Ed.), Advances in explainability, agents, and large language models: first international workshop on causality, agents and large models, CALM 2024, Kyoto, Japan, November 18–19, 2024, proceedings. Paper presented at 1st International Workshop on Causality, Agents and Large Models, CALM 2024, Kyoto, Japan, November 18-19, 2024 (pp. 20-32). Cham: Springer
Generating explanations for molecular property predictions in graph neural networks
2025 (English) In: Advances in explainability, agents, and large language models: first international workshop on causality, agents and large models, CALM 2024, Kyoto, Japan, November 18–19, 2024, proceedings / [ed] Yazan Mualla; Liuwen Yu; Davide Liga; Igor Tchappi; Réka Markovich, Cham: Springer, 2025, p. 20-32. Conference paper, Published paper (Refereed)
Abstract [en]

Graph neural networks have helped researchers overcome the challenges of deep learning on graphs in non-Euclidean space. As with most deep learning algorithms, although the models produce good predictions, explaining those predictions is often challenging. This paper focuses on applying graph neural networks to predict the properties of the molecules in molecular datasets, with the aim of exploring the generation of explanations for molecule property predictions. Four graph neural networks and seven explainers are chosen to generate and compare the quality of the explanations given by the explainers for each of the model predictions. The quality of the explanations is measured by sparsity, fidelity, and fidelity inverse. It is observed that the models find it difficult to learn the node embeddings when there is a class imbalance, despite achieving 75% accuracy and an F1 score of 66%. It is also observed that, for all datasets, sparsity had a statistically significant effect on fidelity; that is, as more important features are masked, the quality of the explanation decreases. The effect of sparsity on fidelity inverse varied from dataset to dataset: as more unimportant features were masked, the quality of the explanations improved in some datasets, while the change was not significant in others. Finally, it was observed that explanation quality differs across models. However, larger neural networks produced better predictions in our experiments, and the quality of the explanations of those predictions was not lower than that of smaller neural networks.
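
To make the evaluation metrics concrete, the following is a minimal, illustrative Python sketch (not code from the paper) of how sparsity, fidelity, and fidelity inverse can be computed for a graph classifier, given a node-feature importance mask produced by any explainer. The model interface, the 0.5 mask threshold, and the use of the maximum class-probability change are assumptions made for demonstration.

```python
import torch

def sparsity(mask: torch.Tensor, threshold: float = 0.5) -> float:
    """Fraction of input features the explanation does NOT mark as important."""
    return 1.0 - (mask > threshold).float().mean().item()

def fidelity(model, x, edge_index, mask, threshold: float = 0.5):
    """Fidelity and fidelity inverse for one graph (illustrative definitions).

    Fidelity: prediction change when the features marked as important are
    removed (higher = the explanation captured what mattered).
    Fidelity inverse: prediction change when only the important features
    are kept (lower = better).
    """
    with torch.no_grad():
        p_full = model(x, edge_index).softmax(-1)
        p_drop = model(x * (mask <= threshold).float(), edge_index).softmax(-1)
        p_keep = model(x * (mask > threshold).float(), edge_index).softmax(-1)
    fid = (p_full - p_drop).abs().max().item()
    fid_inv = (p_full - p_keep).abs().max().item()
    return fid, fid_inv
```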

Place, publisher, year, edition, pages
Cham: Springer, 2025
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 2471
Keywords
Explainability, Graph Neural networks, Molecular properties
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-238729 (URN), 10.1007/978-3-031-89103-8_2 (DOI), 2-s2.0-105004255003 (Scopus ID), 978-3-031-89102-1 (ISBN), 978-3-031-89103-8 (ISBN)
Conference
1st International Workshop on Causality, Agents and Large Models, CALM 2024, Kyoto, Japan, November 18-19, 2024
Available from: 2025-05-13 Created: 2025-05-13 Last updated: 2025-05-13. Bibliographically approved
Främling, K. (2024). Contextual importance and utility in python: new functionality and insights with the py-ciu package. In: XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju, South Korea. Paper presented at the XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), 15 August, 2024.
Contextual importance and utility in python: new functionality and insights with the py-ciu package
2024 (English) In: XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju, South Korea, 2024. Conference paper, Published paper (Refereed)
Abstract [en]

The availability of easy-to-use and reliable software implementations is important for allowing researchers in academia and industry to test, assess, and adopt eXplainable AI (XAI) methods. This paper describes py-ciu, the Python implementation of the Contextual Importance and Utility (CIU) model-agnostic, post-hoc explanation method, and illustrates capabilities of CIU that go beyond the current state of the art and could be useful for XAI practitioners in general.
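
For readers unfamiliar with the method, the sketch below implements CIU's two core quantities from their published definitions rather than through the py-ciu package itself, whose actual API is not reproduced here: contextual importance (CI) is the spread of the model output as one feature sweeps its value range with the other features fixed, and contextual utility (CU) is where the instance's own output sits within that spread. The predict interface, value ranges, and sampling grid are assumptions.

```python
import numpy as np

def ciu_feature(predict, instance, j, feat_min, feat_max,
                out_min=0.0, out_max=1.0, n_samples=100):
    """CI and CU of feature j for one instance.

    predict maps an (n, d) array to n scalar outputs, e.g. the
    probability of one class (a hypothetical interface).
    """
    grid = np.linspace(feat_min, feat_max, n_samples)
    variants = np.tile(instance, (n_samples, 1))   # copies of the instance
    variants[:, j] = grid                          # sweep feature j over its range
    outs = np.asarray(predict(variants))
    cmin, cmax = outs.min(), outs.max()
    y = float(predict(instance.reshape(1, -1))[0]) # output for the instance itself
    ci = (cmax - cmin) / (out_max - out_min)       # importance: output spread
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5  # utility: position
    return ci, cu
```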

National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-228992 (URN)
Conference
XAI 2024 Workshop of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), 15 August, 2024.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Available from: 2024-08-30 Created: 2024-08-30 Last updated: 2024-09-04. Bibliographically approved
Calvaresi, D., Najjar, A., Omicini, A., Aydogan, R., Carli, R., Ciatto, G., . . . Främling, K. (Eds.). (2024). Explainable and Transparent AI and Multi-Agent Systems: 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers (1st ed.). Paper presented at EXTRAAMAS 2024, 6th International Workshop, Auckland, New Zealand, May 6–10, 2024. Springer
Explainable and Transparent AI and Multi-Agent Systems: 6th International Workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, Revised Selected Papers
2024 (English) Conference proceedings (editor) (Refereed)
Abstract [en]

This volume constitutes the revised selected papers of the 6th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2024, held in Auckland, New Zealand, during May 6–10, 2024.

The 13 full papers presented in this book were carefully reviewed and selected from 25 submissions. The papers are organized in the following topical sections: User-centric XAI; XAI and Reinforcement Learning; Neuro-symbolic AI and Explainable Machine Learning; and XAI & Ethics.

Place, publisher, year, edition, pages
Springer, 2024, p. 243, Edition: 1
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14847
Keywords
Computer Science, Informatics, multi-agent systems, artificial intelligence, knowledge representation and reasoning
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-230476 (URN), 10.1007/978-3-031-70074-3 (DOI), 978-3-031-70073-6 (ISBN), 978-3-031-70074-3 (ISBN)
Conference
EXTRAAMAS 2024, 6th International Workshop, Auckland, New Zealand, May 6–10, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Note

Part of the book sub series: Lecture Notes in Artificial Intelligence (LNAI).

Included in the following conference series: EXTRAAMAS: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems.

Available from: 2024-10-02 Created: 2024-10-02 Last updated: 2024-10-03. Bibliographically approved
Huotari, M., Malhi, A. & Främling, K. (2024). Machine learning applications for smart building energy utilization: a survey. Archives of Computational Methods in Engineering, 31(5), 2537-2556
Machine learning applications for smart building energy utilization: a survey
2024 (English) In: Archives of Computational Methods in Engineering, ISSN 1134-3060, E-ISSN 1886-1784, Vol. 31, no 5, p. 2537-2556. Article in journal (Refereed) Published
Abstract [en]

The United Nations launched the Sustainable Development Goals in 2015, which include goals for sustainable energy. Of global energy consumption, households account for 20–30% in Europe, North America, and Asia; furthermore, overall global energy consumption has steadily increased in recent decades. Consequently, to meet the increased energy demand and to promote efficient energy consumption, there is a persistent need to develop applications enhancing the utilization of energy in buildings. However, despite the potential significance of AI in this area, few surveys have systematically categorized these applications. Therefore, this paper presents a systematic review of the literature and then creates a novel taxonomy for applications of smart building energy utilization. The contributions of this paper are (a) a systematic review of applications and machine learning methods for smart building energy utilization, (b) a novel taxonomy for the applications, (c) a detailed analysis of the solutions and techniques used for the applications (electric grid, smart building energy management and control, maintenance and security, and personalization), and, finally, (d) a discussion of open issues and developments in the field.

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Other Computer and Information Science
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-220506 (URN), 10.1007/s11831-023-10054-7 (DOI), 001156366200002 (Web of Science ID), 2-s2.0-85184197546 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Available from: 2024-02-05 Created: 2024-02-05 Last updated: 2024-08-20. Bibliographically approved
Calvaresi, D., Najjar, A., Omicini, A. & Främling, K. (2024). Preface. In: Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Joris Hulstijn; Kary Främling (Ed.), Explainable and transparent AI and multi-agent systems: 6th international workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, revised selected papers (pp. v-v). Cham: Springer
Preface
2024 (English) In: Explainable and transparent AI and multi-agent systems: 6th international workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Joris Hulstijn; Kary Främling, Cham: Springer, 2024, p. v-v. Chapter in book (Other academic)
Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 14847
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:umu:diva-230966 (URN), 2-s2.0-85206104226 (Scopus ID), 9783031700736 (ISBN), 9783031700743 (ISBN)
Note

Part of the book sub-series: Lecture Notes in Artificial Intelligence (LNAI)

Available from: 2024-10-21 Created: 2024-10-21 Last updated: 2024-10-21. Bibliographically approved
Främling, K., Apopei, I.-V., Grund Pihlgren, G. & Malhi, A. (2024). py_ciu_image: a python library for explaining image classification with contextual importance and utility. In: Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Joris Hulstijn; Kary Främling (Ed.), Explainable and transparent AI and multi-agent systems: 6th international workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, revised selected papers. Paper presented at 6th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems, EXTRAAMAS 2024, Auckland, New Zealand, May 6-10, 2024 (pp. 184-188). Cham: Springer
py_ciu_image: a python library for explaining image classification with contextual importance and utility
2024 (English) In: Explainable and transparent AI and multi-agent systems: 6th international workshop, EXTRAAMAS 2024, Auckland, New Zealand, May 6–10, 2024, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Joris Hulstijn; Kary Främling, Cham: Springer, 2024, p. 184-188. Conference paper, Published paper (Refereed)
Abstract [en]

Contextual Importance and Utility (CIU) is a model-agnostic method for explaining outcomes of AI systems. CIU has succeeded in producing meaningful explanations where state-of-the-art methods fail, e.g. for detecting bleeding in gastroenterological images. This paper presents a Python implementation of CIU for explaining image classifications.
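
As a hedged illustration of the idea behind py_ciu_image (the library's real interface is not reproduced here), superpixels can serve as the "features" whose contextual influence is probed: each segment is perturbed in turn and the drop in the target class probability is recorded. The predict interface, the grey-out perturbation, and the segment count are assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def segment_importance(predict, image, target_class, n_segments=50):
    """Score each superpixel by the probability drop when it is greyed out."""
    segments = slic(image, n_segments=n_segments, compactness=10)
    baseline = predict(image[None])[0, target_class]
    mean_colour = image.mean(axis=(0, 1))          # crude "neutral" fill value
    scores = {}
    for seg_id in np.unique(segments):
        perturbed = image.copy()
        perturbed[segments == seg_id] = mean_colour
        scores[int(seg_id)] = float(
            baseline - predict(perturbed[None])[0, target_class]
        )
    return scores  # larger drop = more influential segment
```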

Place, publisher, year, edition, pages
Cham: Springer, 2024
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 14847
Keywords
Contextual Importance and Utility, Deep Neural Network, Explainable Artificial Intelligence, Image Classification
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-230968 (URN), 10.1007/978-3-031-70074-3_10 (DOI), 001336429000010 (Web of Science ID), 2-s2.0-85206108952 (Scopus ID), 9783031700736 (ISBN), 9783031700743 (ISBN)
Conference
6th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems, EXTRAAMAS 2024, Auckland, New Zealand, May 6-10, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Note

Part of the book sub-series: Lecture Notes in Artificial Intelligence (LNAI)

Available from: 2024-10-21 Created: 2024-10-21 Last updated: 2025-04-24. Bibliographically approved
Yousefnezhad, N., Malhi, A., Keyriläinen, T. & Främling, K. (2023). A comprehensive security architecture for information management throughout the lifecycle of IoT products. Sensors, 23(6), Article ID 3236.
A comprehensive security architecture for information management throughout the lifecycle of IoT products
2023 (English) In: Sensors, E-ISSN 1424-8220, Vol. 23, no 6, article id 3236. Article in journal (Refereed) Published
Abstract [en]

The Internet of things (IoT) is expected to have an impact on business and the world at large in a way comparable to the Internet itself. An IoT product is a physical product with an associated virtual counterpart connected to the Internet with computational as well as communication capabilities. The ability to collect information from Internet-connected products and sensors offers unprecedented opportunities to improve and optimize product use and maintenance. Virtual counterpart and digital twin (DT) concepts have been proposed as a solution for providing the necessary information management throughout the whole product lifecycle, which we here call product lifecycle information management (PLIM). Security in these systems is imperative due to the multiple ways in which adversaries can attack the system during the whole lifecycle of an IoT product. To address this need, the current study proposes a security architecture for the IoT, taking into particular consideration the requirements of PLIM. The security architecture has been designed for the Open Messaging Interface (O-MI) and Open Data Format (O-DF) standards for the IoT and product lifecycle management (PLM), but it is also applicable to other IoT and PLIM architectures. The proposed security architecture is capable of hindering unauthorized access to information and of restricting access levels based on user roles and permissions. Based on our findings, the proposed security architecture is the first security model for PLIM to integrate and coordinate the IoT ecosystem, by dividing the security approaches into two domains: the user client and product domain. The security architecture has been deployed in smart city use cases in three different European cities, Helsinki, Lyon, and Brussels, to validate the security metrics in the proposed approach. Our analysis shows that the proposed security architecture can easily integrate the security requirements of both clients and products, providing solutions for them, as demonstrated in the implemented use cases.
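
The role-and-permission mechanism the abstract describes can be pictured with a minimal sketch, assuming hypothetical role and permission names that are not taken from the paper:

```python
# Minimal role-based access control sketch; the roles, actions, and the
# mapping below are illustrative assumptions, not the paper's model.
ROLE_PERMISSIONS = {
    "manufacturer": {"read", "write", "configure"},
    "maintainer":   {"read", "write"},
    "end_user":     {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action on product lifecycle information only if the
    user's role grants the corresponding permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("maintainer", "write")
assert not authorize("end_user", "configure")
```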

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
Internet of things (IoT), information management, security architecture, product lifecycle information management (PLIM), identity and access management (IAM)
National Category
Information Systems
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-205821 (URN), 10.3390/s23063236 (DOI), 000959436000001 (Web of Science ID), 36991946 (PubMedID), 2-s2.0-85151184689 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220; EU, Horizon 2020, 856602
Available from: 2023-03-20 Created: 2023-03-20 Last updated: 2023-09-05. Bibliographically approved
Malhi, A. & Främling, K. (2023). An evaluation of contextual importance and utility for outcome explanation of black-box predictions for medical datasets. In: Luca Longo (Ed.), Explainable artificial intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I. Paper presented at 1st World Conference on Explainable Artificial Intelligence, xAI 2023, Lisbon, Portugal, July 26-28, 2023 (pp. 544-557). Springer Nature
An evaluation of contextual importance and utility for outcome explanation of black-box predictions for medical datasets
2023 (English) In: Explainable artificial intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I / [ed] Luca Longo, Springer Nature, 2023, p. 544-557. Conference paper, Published paper (Refereed)
Abstract [en]

Contextual Importance and Utility (CIU) is a model-agnostic method for producing situation- or instance-specific explanations of the outcome of so-called black-box systems. A major difference between CIU and other outcome explanation methods (also called post-hoc methods) is that CIU produces explanations without producing any intermediate interpretable model. CIU's notion of importance is similar to that in Decision Theory but differs from how importance is defined for other outcome explanation methods. Utility is also a well-known concept from Decision Theory that is largely ignored in current Explainable AI research. CIU is here validated by providing explanations for two popular medical data sets, heart disease and breast cancer, in order to show the applicability of CIU explanations to medical predictions and with different black-box models. The explanations are compared with corresponding ones produced by the Local Interpretable Model-agnostic Explanations (LIME) method [17], which is currently one of the most used post-hoc explanation methods. The paper's main contribution is to provide new CIU results and insights on several benchmark data sets and to show in what way CIU differs from LIME-based explanations.
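
For orientation, this is what the LIME side of such a comparison looks like with the standard lime package API; the data, model, feature names, and class names below are placeholders, not the paper's experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))                     # placeholder tabular data
y_train = (X_train[:, 0] + X_train[:, 2] > 1).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["age", "resting_bp", "cholesterol", "max_heart_rate"],
    class_names=["healthy", "disease"],            # illustrative labels only
    mode="classification",
)
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(exp.as_list())   # [(feature condition, signed weight), ...]
```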

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Communications in Computer and Information Science book series (CCIS), ISSN 1865-0929, E-ISSN 1865-0937 ; 1901
Keywords
Explainable AI, Contextual Importance, Contextual Utility, Multiple Criteria Decision Making, Heart disease, Breast cancer data
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-217312 (URN), 10.1007/978-3-031-44064-9_29 (DOI), 001286482300029 (Web of Science ID), 2-s2.0-85176960967 (Scopus ID), 978-3-031-44063-2 (ISBN), 978-3-031-44064-9 (ISBN)
Conference
1st World Conference on Explainable Artificial Intelligence, xAI 2023, Lisbon, Portugal, July 26-28, 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Available from: 2023-11-29 Created: 2023-11-29 Last updated: 2025-04-24. Bibliographically approved
Främling, K. (2023). Counterfactual, contrastive, and hierarchical explanations with contextual importance and utility. In: Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Yazan Mualla; Kary Främling (Ed.), Explainable and transparent AI and multi-agent systems: 5th international workshop, EXTRAAMAS 2023, London, UK, May 29, 2023, revised selected papers (pp. 180-184). Paper presented at AAMAS 2023. Springer Nature, 14127
Counterfactual, contrastive, and hierarchical explanations with contextual importance and utility
2023 (English) In: Explainable and transparent AI and multi-agent systems: 5th international workshop, EXTRAAMAS 2023, London, UK, May 29, 2023, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Andrea Omicini; Reyhan Aydogan; Rachele Carli; Giovanni Ciatto; Yazan Mualla; Kary Främling, Springer Nature, 2023, Vol. 14127, p. 180-184. Chapter in book (Refereed)
Abstract [en]

Contextual Importance and Utility (CIU) is a model-agnostic method for post-hoc explanation of prediction outcomes. In this paper we describe and show new functionality in the R implementation of CIU for tabular data. Much of that functionality is specific to CIU and goes beyond the current state of the art.
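
One of the ideas this functionality builds on can be sketched briefly: in CIU terms, counterfactual candidates are features that are important (high CI) but currently speak against the outcome (low CU). The thresholds and the dict-based interface below are illustrative assumptions, not the R package's API.

```python
def counterfactual_features(ci_cu, ci_min=0.2, cu_max=0.4):
    """Features worth changing: important (high CI) but unfavourable (low CU)."""
    return [f for f, (ci, cu) in ci_cu.items() if ci >= ci_min and cu <= cu_max]

# Example: age and cholesterol are important and unfavourable, so they
# dominate a "what would need to change" explanation; blood pressure is
# already favourable and is left out.
print(counterfactual_features({"age": (0.6, 0.2),
                               "cholesterol": (0.5, 0.3),
                               "resting_bp": (0.1, 0.9)}))  # ['age', 'cholesterol']
```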

Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14127
Keywords
Contextual Importance and Utility, Explainable AI, Open source, Counterfactual, Contrastive
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-215057 (URN), 10.1007/978-3-031-40878-6_16 (DOI), 2-s2.0-85172205293 (Scopus ID), 978-3-031-40877-9 (ISBN), 978-3-031-40878-6 (ISBN)
Conference
AAMAS 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP), 570011220
Note

Part of conference: 5th International Workshop, EXTRAAMAS 2023, London, UK, May 29, 2023.

Available from: 2023-10-06 Created: 2023-10-06 Last updated: 2023-10-09. Bibliographically approved
Patil, M. & Främling, K. (2023). Do intermediate feature coalitions aid explainability of black-box models? In: Luca Longo (Ed.), Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I. Paper presented at xAI 2023: Explainable Artificial Intelligence, Lisbon, Portugal, July 26-28, 2023 (pp. 115-130). Cham: Springer
Do intermediate feature coalitions aid explainability of black-box models?
2023 (English) In: Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I / [ed] Luca Longo, Cham: Springer, 2023, p. 115-130. Conference paper, Published paper (Refereed)
Abstract [en]

This work introduces the notion of intermediate concepts based on a levels structure to aid explainability for black-box models. The levels structure is a hierarchical structure in which each level corresponds to a partition of the features of a dataset (i.e., a player-set partition). The level of coarseness increases from the trivial partition, which comprises only singletons, to the partition that contains only the grand coalition. In addition, it is possible to establish meronomies, i.e., part-whole relationships, via a domain expert, which can be utilised to generate explanations at an abstract level. We illustrate the usability of this approach in a real-world car model example and on the Titanic dataset, where intermediate concepts aid explainability at different levels of abstraction.
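
The following sketch illustrates the levels-structure idea under stated assumptions (the grouping, dataset, and predict interface are hypothetical, not the paper's code): features are gathered into named intermediate concepts, and a whole coalition is perturbed jointly to score its importance at that level of abstraction.

```python
import numpy as np

LEVELS = {  # hypothetical part-whole grouping for a Titanic-like dataset
    "wealth":   [0, 1],   # fare, passenger class
    "family":   [2, 3],   # siblings/spouses, parents/children aboard
    "personal": [4, 5],   # age, sex
}

def coalition_importance(predict, instance, background, coalition):
    """Output spread when all features of a coalition are perturbed jointly,
    using rows of a background dataset as replacement values."""
    variants = np.tile(instance, (len(background), 1))
    variants[:, coalition] = background[:, coalition]
    outs = np.asarray(predict(variants))
    return float(outs.max() - outs.min())

# e.g.: for name, idx in LEVELS.items():
#           print(name, coalition_importance(predict, x, X_background, idx))
```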

Place, publisher, year, edition, pages
Cham: Springer, 2023
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1901
Keywords
Coalition Formation, Explainability, Trust in Human-Agent Systems
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-216079 (URN), 10.1007/978-3-031-44064-9_7 (DOI), 001286482300007 (Web of Science ID), 2-s2.0-85176954534 (Scopus ID), 9783031440632 (ISBN), 9783031440649 (ISBN)
Conference
xAI 2023: Explainable Artificial Intelligence, Lisbon, Portugal, July 26-28, 2023
Funder
Knut and Alice Wallenberg Foundation, 570011440
Available from: 2023-11-01 Created: 2023-11-01 Last updated: 2025-04-24. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0002-8078-5172
