Efficient federated unlearning under plausible deniability
Varshney, Ayush K. (Umeå University, Faculty of Science and Technology, Department of Computing Science; NAUSICA). ORCID iD: 0000-0002-8073-6784
Torra, Vicenç (Umeå University, Faculty of Science and Technology, Department of Computing Science; NAUSICA). ORCID iD: 0000-0002-0368-8037
2025 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 114, no. 1, article id 25. Article in journal (Refereed). Published.
Abstract [en]

Privacy regulations such as the GDPR in Europe and the CCPA in the US grant users the right to have their data removed from machine learning (ML) applications. Machine unlearning addresses this by modifying the model parameters so as to forget the influence of specific data points on its weights. Recent literature has highlighted that the contribution of a data point (or points) can be forged with other data points in the dataset with probability close to one, which allows a server to falsely claim unlearning without actually modifying the model's parameters. However, in distributed paradigms such as federated learning (FL), where the server lacks access to the dataset and the number of clients is limited, claiming unlearning in this way becomes a challenge: an honest server must modify the model parameters in order to unlearn. This paper introduces an efficient way to achieve machine unlearning in FL, i.e., federated unlearning, by employing a privacy model which allows the FL server to plausibly deny a client's participation in training up to a certain extent. Specifically, we demonstrate that the server can generate a Proof-of-Deniability, in which each aggregated update can be associated with at least x (the plausible-deniability parameter) client updates. This enables the server to plausibly deny a client's participation. In the event of frequent unlearning requests, however, the server is required to adopt an unlearning strategy and update its model parameters accordingly. We also perturb the client updates within a cluster in order to prevent inference by an honest-but-curious server. We show that the global model satisfies (ε, δ)-differential privacy after T communication rounds. The proposed methodology has been evaluated on multiple datasets in different privacy settings. The experimental results show that our framework achieves comparable utility while providing a significant reduction in memory (≈ 30 times) as well as in retraining time (1.6-500769 times). The source code for the paper is available at https://github.com/Ayush-Umu/Federated-Unlearning-under-Plausible-Deniability.
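The abstract describes plausibly deniable aggregation only at a high level. The sketch below is an illustrative, simplified rendering of that idea and not the authors' implementation (see the linked GitHub repository for the actual code): it assumes client updates are grouped into clusters of at least x members, perturbed with Gaussian noise, and then averaged, so that any aggregated update can be attributed to at least x participants. All names (cluster_clients, aggregate_with_deniability) and the noise scale sigma are hypothetical choices made for this example.

```python
# Illustrative sketch only: group client updates into clusters of size >= x
# (the plausible-deniability parameter), perturb each update with Gaussian
# noise, and average. This is NOT the paper's implementation.
import numpy as np


def cluster_clients(n_clients, x, rng):
    """Randomly partition client indices into groups of at least x members."""
    order = rng.permutation(n_clients)
    clusters = [order[i:i + x] for i in range(0, n_clients, x)]
    # Fold a too-small trailing group into the previous cluster so that
    # every cluster keeps at least x members.
    if len(clusters) > 1 and len(clusters[-1]) < x:
        tail = clusters.pop()
        clusters[-1] = np.concatenate([clusters[-1], tail])
    return clusters


def aggregate_with_deniability(client_updates, x=3, sigma=0.1, seed=0):
    """Average noised client updates cluster by cluster, then globally.

    Because every cluster mixes at least x clients, each aggregated update
    can be associated with at least x participants, which is the property
    the abstract refers to as plausible deniability.
    """
    rng = np.random.default_rng(seed)
    clusters = cluster_clients(len(client_updates), x, rng)
    cluster_means = []
    for members in clusters:
        noised = [client_updates[i] + rng.normal(0.0, sigma, client_updates[i].shape)
                  for i in members]
        cluster_means.append(np.mean(noised, axis=0))
    return np.mean(cluster_means, axis=0)


# Toy usage: ten clients, each contributing a 5-dimensional update vector.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    updates = [rng.normal(size=5) for _ in range(10)]
    print(aggregate_with_deniability(updates, x=3, sigma=0.05))
```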

Place, publisher, year, edition, pages
Springer Nature, 2025. Vol. 114, no 1, article id 25
Keywords [en]
Machine unlearning, Federated unlearning, FedAvg, Integral privacy, Plausible deniability, Differential privacy
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-234248
DOI: 10.1007/s10994-024-06685-x
ISI: 001400054000004
Scopus ID: 2-s2.0-85217772811
OAI: oai:DiVA.org:umu-234248
DiVA id: diva2:1928947
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-01-18. Created: 2025-01-18. Last updated: 2025-02-25. Bibliographically approved.

Open Access in DiVA

fulltext (2346 kB)
File information
File name: FULLTEXT01.pdf
File size: 2346 kB
Checksum (SHA-512): 78ed0965d5c8984da8f42452c4a0e1b5af467875da7a3e26cd251775a8125719dec92124eed303e80d630eba24a58b618ef2df7025a4758e24e4488966113332
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Varshney, Ayush K.; Torra, Vicenç
