Explainable machine learning models with privacy
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-0368-8037
2024 (English). In: Progress in Artificial Intelligence, ISSN 2192-6352, E-ISSN 2192-6360, Vol. 13, p. 31-50. Article in journal (Refereed). Published.
Abstract [en]

The importance of explainable machine learning models is increasing because users want to understand the reasons behind decisions made by data-driven models. Interpretability and explainability emerge from this need to design comprehensible systems. This paper focuses on privacy-preserving explainable machine learning. We study two data masking techniques: maximum distance to average vector (MDAV) and additive noise. The former achieves k-anonymity, and the latter uses Laplacian noise to avoid record leakage and provide a level of differential privacy. We are interested in developing data-driven models that make explainable decisions and are, at the same time, privacy-preserving. That is, we want to avoid the decision-making process leading to disclosure. To that end, we propose building models from anonymized data. More particularly, we use data that are k-anonymous, or to which an appropriate level of noise has been added to satisfy differential privacy requirements. In this paper, we study how explainability is affected by these data protection procedures, using TreeSHAP as our explainability technique. The experiments show that both accuracy and explainability can be preserved to a certain degree. Our results thus show that a trade-off between privacy and explainability is achievable when protecting data with k-anonymity and noise addition.
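The two masking techniques named in the abstract can be sketched in plain Python. This is a minimal, one-dimensional illustration under my own simplifying assumptions, not the authors' implementation: `mdav` follows the standard MDAV heuristic (repeatedly cluster k records around the record farthest from the centroid, then k around the record farthest from that one), and `laplace_noise` adds Laplace(0, sensitivity/epsilon) noise via inverse-CDF sampling. Function names and the 1-D restriction are illustrative choices.

```python
import math
import random

def mdav(values, k):
    """Microaggregate 1-D values with the MDAV heuristic for k-anonymity.

    Every value is replaced by the centroid of a cluster of at least k
    records, so each record becomes indistinguishable from k-1 others.
    """
    assert len(values) >= k
    remaining = list(range(len(values)))
    out = [0.0] * len(values)

    def assign(cluster):
        centroid = sum(values[i] for i in cluster) / len(cluster)
        for i in cluster:
            out[i] = centroid

    while len(remaining) >= 3 * k:
        centroid = sum(values[i] for i in remaining) / len(remaining)
        # r: record farthest from the centroid; cluster its k-1 nearest with it
        r = max(remaining, key=lambda i: abs(values[i] - centroid))
        near_r = sorted(remaining, key=lambda i: abs(values[i] - values[r]))[:k]
        for i in near_r:
            remaining.remove(i)
        assign(near_r)
        # s: record farthest from r; cluster its k-1 nearest with it
        s = max(remaining, key=lambda i: abs(values[i] - values[r]))
        near_s = sorted(remaining, key=lambda i: abs(values[i] - values[s]))[:k]
        for i in near_s:
            remaining.remove(i)
        assign(near_s)
    if len(remaining) >= 2 * k:
        # between 2k and 3k-1 records left: split off one more cluster of k
        centroid = sum(values[i] for i in remaining) / len(remaining)
        r = max(remaining, key=lambda i: abs(values[i] - centroid))
        near_r = sorted(remaining, key=lambda i: abs(values[i] - values[r]))[:k]
        for i in near_r:
            remaining.remove(i)
        assign(near_r)
    assign(remaining)  # final cluster of size k..2k-1
    return out

def laplace_noise(values, sensitivity, epsilon):
    """Add Laplace(0, sensitivity/epsilon) noise to each value (local DP)."""
    b = sensitivity / epsilon
    noisy = []
    for v in values:
        u = random.random() - 0.5  # uniform on [-0.5, 0.5)
        noisy.append(v - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)))
    return noisy
```

On values 0..9 with k = 3, every output of `mdav` is a cluster centroid shared by at least three records, so per-cluster means (and hence the overall sum) are preserved while individual values are hidden; larger k or larger noise scale protects more and distorts more, which is the trade-off the paper measures.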

Place, publisher, year, edition, pages
Springer, 2024. Vol. 13, p. 31-50
Keywords [en]
Data privacy, Explainability, eXplainable artificial intelligence, Irregularity, k-anonymity, Local differential privacy, Machine learning, Microaggregation, Noise addition
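The explainability technique used in the paper is TreeSHAP, which computes Shapley values for tree ensembles in polynomial time. As a reference point only (not the paper's code), the same quantities can be computed exactly for a tiny model by brute-force enumeration of feature coalitions; the `model`/`baseline` interface below is an illustrative assumption, with absent features replaced by their baseline value.

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values for one instance by enumerating coalitions.

    phi[i] = sum over S not containing i of
             |S|! (n-|S|-1)! / n!  *  (f(S + {i}) - f(S)),
    where f evaluates the model with features outside the coalition
    set to their baseline. Exponential in n, but easy to verify;
    TreeSHAP computes the same values efficiently for tree ensembles.
    """
    n = len(x)

    def f(subset):
        return model([x[i] if i in subset else baseline[i] for i in range(n)])

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[i] += w * (f(set(S) | {i}) - f(set(S)))
    return phi
```

For a linear model the attributions recover the coefficients times the feature displacements, and in general the values sum to `f(x) - f(baseline)` (the efficiency property), which is what makes comparing explanations before and after data masking well defined.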
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-223264
DOI: 10.1007/s13748-024-00315-2
ISI: 001194608000001
Scopus ID: 2-s2.0-85189563446
OAI: oai:DiVA.org:umu-223264
DiVA id: diva2:1852296
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-04-17. Created: 2024-04-17. Last updated: 2024-06-25. Bibliographically approved.

Open Access in DiVA

fulltext (5034 kB), 84 downloads
File information
File name: FULLTEXT01.pdf
File size: 5034 kB
Checksum (SHA-512):
e44206722e0d74955def238c9f2ddcd792960b447bb984736d134701e1e157cd143d6b9932051d86c59495a876deaf4dc7359f4305e50063c1dc3bd44c33a56a
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Bozorgpanah, Aso; Torra, Vicenç
