Poisoning attacks on federated learning for autonomous driving
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-7204-8228
AI Sweden; Royal Institute of Technology.
AI Sweden; Chalmers University of Technology.
AI Sweden; Royal Institute of Technology.
2024 (English). In: 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden / [ed] Florian Westphal, Einav Peretz-Andersson, Maria Riveiro, Kerstin Bach, and Fredrik Heintz, Linköping University Electronic Press, 2024, Vol. 208, p. 11-18, article id 2. Conference paper, Published paper (Refereed)
Abstract [en]

Federated Learning (FL) is a decentralized learning paradigm that enables parties to collaboratively train models while keeping their data confidential. Within autonomous driving, it brings the potential of reducing data storage costs, lowering bandwidth requirements, and accelerating learning. FL is, however, susceptible to poisoning attacks. In this paper, we introduce two novel poisoning attacks on FL tailored to regression tasks within autonomous driving: FLStealth and the Off-Track Attack (OTA). FLStealth, an untargeted attack, aims to provide model updates that deteriorate the global model's performance while appearing benign. OTA, on the other hand, is a targeted attack with the objective of changing the global model's behavior when exposed to a certain trigger. We demonstrate the effectiveness of our attacks by conducting comprehensive experiments on the task of vehicle trajectory prediction. In particular, we show that, among five different untargeted attacks, FLStealth is the most successful at bypassing the considered defenses employed by the server. For OTA, we demonstrate the inability of common defense strategies to mitigate the attack, highlighting the critical need for new defensive mechanisms against targeted attacks within FL for autonomous driving.
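
The threat model sketched in the abstract can be pictured with a short toy example: several honest clients send benign updates in a federated round, while one malicious client submits an update crafted to work against the aggregate yet rescaled so its norm looks benign to the server. The Python/NumPy sketch below is an illustrative assumption only; it does not reproduce FLStealth or OTA as defined in the paper, and the model size, client count, and helper names (honest_update, malicious_update) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
DIM = 10                 # hypothetical flattened model size
NUM_CLIENTS = 5          # hypothetical number of clients per round
TARGET = np.ones(DIM)    # hypothetical "true" parameters honest clients move toward

def honest_update(global_model):
    # A benign client's update: a small noisy step toward the shared optimum.
    return 0.1 * (TARGET - global_model) + 0.01 * rng.normal(size=DIM)

def malicious_update(global_model, benign_norm):
    # A generic untargeted poisoning update: reverse the useful direction, then
    # rescale so its norm matches a typical benign update and evades naive
    # norm-based filtering at the server.
    poison = -(TARGET - global_model)
    return poison / (np.linalg.norm(poison) + 1e-12) * benign_norm

global_model = np.zeros(DIM)
for _ in range(20):
    updates = [honest_update(global_model) for _ in range(NUM_CLIENTS - 1)]
    benign_norm = float(np.mean([np.linalg.norm(u) for u in updates]))
    updates.append(malicious_update(global_model, benign_norm))
    global_model = global_model + np.mean(updates, axis=0)   # plain FedAvg-style averaging

print("distance to optimum after 20 rounds:",
      round(float(np.linalg.norm(global_model - TARGET)), 3))

Dropping the malicious client from the round (or running with NUM_CLIENTS honest clients) shows markedly faster convergence, which is the qualitative degradation an untargeted attack aims for; the attacks and defenses actually evaluated in the paper operate on vehicle trajectory-prediction models and are considerably more elaborate.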

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2024. Vol. 208, p. 11-18, article id 2
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 208
National Category
Engineering and Technology
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-226764
DOI: 10.3384/ecp208002
ISBN: 978-91-8075-709-6 (electronic)
OAI: oai:DiVA.org:umu-226764
DiVA, id: diva2:1874569
Conference
Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden
Available from: 2024-06-20 Created: 2024-06-20 Last updated: 2024-06-20. Bibliographically approved

Open Access in DiVA

fulltext (700 kB), 57 downloads
File information
File name: FULLTEXT01.pdf
File size: 700 kB
Checksum (SHA-512): cde50eeae45ef88dabee6eafe8c9bfff67c68c68ac10c45e527d1ab64bc31c55d4c07644b2a73b3b1fabf9fecff90104b0a0a509ca4e0cefb8e86e40f25ad88e
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text

Authority records

Garg, Sonakshi
