2022 (English). In: Technologies, E-ISSN 2227-7080, Vol. 10, no. 6, article id 125. Article in journal (Refereed). Published
Abstract [en]
There is an increasing need to provide explainability for machine learning models. Different alternatives exist for providing explainability, for example, local and global methods. One approach is based on Shapley values. Privacy is another critical requirement when dealing with sensitive data, as data-driven machine learning models may lead to the disclosure of sensitive information. Data privacy provides several methods for ensuring privacy. In this paper, we study how explainability methods based on Shapley values are affected by privacy methods. We show that some degree of protection still permits maintaining the information in the Shapley values for the four machine learning models studied. Experiments seem to indicate that, among the four models, the Shapley values of linear models are the most affected ones.
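The interaction studied in the abstract can be illustrated with a minimal sketch. It assumes additive-noise masking as the privacy method and uses the closed-form Shapley values of a linear model, where the Shapley value of feature i is w_i * (x_i - E[x_i]); the function names, weights, and noise scale are illustrative and not taken from the paper.

```python
import random

def shapley_linear(weights, x, background_means):
    # For a linear model f(x) = sum_i w_i * x_i, the exact Shapley value
    # of feature i is w_i * (x_i - E[x_i]).
    return [w * (xi - m) for w, xi, m in zip(weights, x, background_means)]

def mask_additive_noise(x, scale, rng):
    # A simple masking method: add bounded uniform noise to each feature
    # (a stand-in for the data-protection methods studied in the paper).
    return [xi + rng.uniform(-scale, scale) for xi in x]

rng = random.Random(0)
weights = [2.0, -1.0, 0.5]        # illustrative linear model
means = [0.0, 0.0, 0.0]           # background feature means
x = [1.0, 2.0, 3.0]               # record to explain

phi_original = shapley_linear(weights, x, means)
phi_masked = shapley_linear(weights, mask_additive_noise(x, 0.1, rng), means)

print(phi_original)   # Shapley values on the original record
print(phi_masked)     # Shapley values on the masked record
```

With a small noise scale the masked Shapley values stay close to the originals, mirroring the paper's observation that some degree of protection still preserves the explanatory information; in a linear model, the perturbation of each Shapley value is directly amplified by the corresponding weight.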
Place, publisher, year, edition, pages
MDPI, 2022
Keywords
anonymization, data protection, explainability, machine learning, masking, Shapley values
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-204995 (URN)
10.3390/technologies10060125 (DOI)
000902763900001
2-s2.0-85147769287 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
2023-03-03. Bibliographically approved