Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7119-7646
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-2633-6798
2023 (English). In: Artificial Intelligence Review, ISSN 0269-2821, E-ISSN 1573-7462, Vol. 56, p. 217-251. Article in journal (Refereed). Published.
Abstract [en]

Adversarial attacks pose fundamental problems to machine learning systems: given the growing prominence of production-level AI, an adversary can poison a model against a certain label, evade classification, or reveal sensitive information about the model and its training data. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, which raises problems for real-time and safety-critical systems in particular, since these are governed by legal constraints under which software changes must be explainable and every change thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We therefore conducted a large survey of attacks and defenses and present a simple, practical framework for analyzing any machine-learning system from a safety-critical perspective, using adversarial noise to find an upper bound on the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of 'safety-critical', even on small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems, with a focus on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
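The abstract's core idea — using adversarial noise to bound the failure rate empirically — can be illustrated with a minimal sketch. This is an assumption-laden toy: a fixed linear classifier and the Fast Gradient Sign Method (FGSM) stand in for the ResNet models and the full attack suite actually evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian clusters in d dimensions.
n, d = 200, 10
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.hstack([-np.ones(n), np.ones(n)])

# A fixed linear classifier (stand-in for a trained model, not the paper's).
w = np.ones(d) / np.sqrt(d)

def predict(X):
    return np.sign(X @ w)

def fgsm(X, y, eps):
    # FGSM for a linear model with margin loss -y * (x @ w):
    # the gradient of the loss w.r.t. x is -y * w, so step along its sign.
    grad = -y[:, None] * w[None, :]
    return X + eps * np.sign(grad)

clean_acc = np.mean(predict(X) == y)
adv_acc = np.mean(predict(fgsm(X, y, eps=1.5)) == y)

# The adversarial error rate is an empirical upper bound on the failure
# rate under perturbations of size eps: a safety-critical argument must
# hold even against this worst-case-style noise, not just clean inputs.
failure_rate_upper_bound = 1.0 - adv_acc
```

Even this easily separable toy problem degrades sharply under small gradient-aligned noise, which is the same qualitative gap the survey quantifies for ResNet configurations against real attack/defense combinations.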

Place, publisher, year, edition, pages
Springer, 2023. Vol. 56, p. 217-251
Keywords [en]
Adversarial machine learning, Computer vision, Autonomous vehicles, Safety-critical
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-211212
DOI: 10.1007/s10462-023-10521-4
ISI: 001014695900002
Scopus ID: 2-s2.0-85162639161
OAI: oai:DiVA.org:umu-211212
DiVA id: diva2:1777455
Funder
Knut and Alice Wallenberg Foundation, 2019.0352
Available from: 2023-06-29. Created: 2023-06-29. Last updated: 2024-01-08. Bibliographically approved.

Open Access in DiVA

fulltext (3751 kB), 64 downloads
File information
File name: FULLTEXT02.pdf. File size: 3751 kB. Checksum: SHA-512
592aab3c3743e1adc210dd71dc2e6b02ed33c092918f8ee2b4e565a21b6baac3b50aece1c3e1a2bb9dae6fd2f0b10097cbf595e1a159ce6f809e234c46d90ae5
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Meyers, Charles; Löfstedt, Tommy; Elmroth, Erik
