Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7119-7646
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-2633-6798
2023 (English). In: Artificial Intelligence Review, ISSN 0269-2821, E-ISSN 1573-7462, Vol. 56, p. 217-251. Article in journal (Refereed). Published
Abstract [en]

Adversaries pose fundamental problems to machine learning systems: adversarial attacks can poison a model against a certain label, evade classification, or reveal sensitive information about the model and its training data, a threat that grows with the prominence of production-level AI. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, which raises problems for real-time and safety-critical systems in particular, since these are governed by legal constraints under which software changes must be explainable and every change must be thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We have therefore conducted a large survey of attacks and defenses and present a simple and practical framework for analyzing any machine-learning system from a safety-critical perspective, using adversarial noise to find the upper bound of the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of 'safety-critical' even on small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems, with a focus on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
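The abstract's failure-rate analysis can be illustrated with a small sketch. Everything below is an assumption for illustration: the linear `model`, the noise bounds, and the trial counts are hypothetical stand-ins, not the paper's actual framework or the ResNet models it evaluates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for a vision model: a fixed linear classifier
# on 32 features (illustrative only; the paper evaluates ResNets).
w = rng.normal(size=32)

def model(x):
    return int(x @ w > 0)

def empirical_failure_rate(x, eps, trials=1000):
    """Estimate how often noise bounded by ||delta||_inf <= eps flips the
    prediction on x. Random sampling gives a Monte Carlo estimate; a
    worst-case (adversarial) search over the same noise ball would yield
    the upper bound on the failure rate that the paper targets."""
    clean = model(x)
    flips = sum(
        model(x + rng.uniform(-eps, eps, size=x.shape)) != clean
        for _ in range(trials)
    )
    return flips / trials

x = rng.normal(size=32)
for eps in (0.0, 0.1, 0.5):
    print(f"eps={eps}: estimated failure rate = {empirical_failure_rate(x, eps):.3f}")
```

Sweeping `eps` in this way traces how quickly the failure rate grows with the noise budget, which is the quantity a safety-critical acceptance threshold would be compared against.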

Place, publisher, year, edition, pages
Springer, 2023. Vol. 56, p. 217-251
Keywords [en]
Adversarial machine learning, Computer vision, Autonomous vehicles, Safety-critical
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-211212
DOI: 10.1007/s10462-023-10521-4
ISI: 001014695900002
Scopus ID: 2-s2.0-85162639161
OAI: oai:DiVA.org:umu-211212
DiVA, id: diva2:1777455
Funder
Knut and Alice Wallenberg Foundation, 2019.0352
Available from: 2023-06-29 Created: 2023-06-29 Last updated: 2025-05-19
Bibliographically approved
In thesis
1. Trustworthy machine learning
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Alternative title [sv]
Tillförlitlig maskininlärning
Abstract [sv]

Denna avhandling studerar robusthet, integritet och reproducerbarhet i säkerhetskritisk maskininlärning, med särskild tonvikt på datorseende, avvikelsedetektering och undvikande attacker.

Arbetet inleds med att analysera de praktiska kostnaderna och fördelarna med försvarsstrategier mot attacker, vilket visar att vanliga mått på robusthet är dåliga indikatorer på verklig prestanda i attacker (Artikel I). Genom storskaliga experiment visar arbetet vidare att exempel på attacker ofta kan genereras i linjär tid, vilket ger angripare en beräkningsfördel gentemot försvarare (Artikel II). För att hantera detta presenterar avhandlingen ett nytt mått, Training Rate and Survival Heuristic (TRASH), för att förutsäga modellfel under attack och underlätta tidigt avvisande av sårbara arkitekturer (Artikel III). Detta mått utvidgades sedan till verkliga kostnader, vilket visar att robusthet i attacker kan förbättras med hjälp av billig hårdvara med låg precision utan att offra noggrannheten (Artikel IV).

Utöver robusthet behandlar avhandlingen integritet genom att utforma en lättviktig klientbaserad modell för spamdetektering som bevarar användardata och står emot flera klasser av attacker utan att kräva att beräkningar görs på serversidan (Artikel V). Som svar på behovet av reproducerbara och granskningsbara experiment i säkerhetskritiska sammanhang presenterar avhandlingen även "deckard", ett deklarativt programvaruramverk för distribuerade och robusta maskininlärningsexperiment (Artikel VI).

Tillsammans erbjuder dessa bidrag empiriska tekniker för att utvärdera och förbättra modellers robusthet, föreslår en integritetsbevarande klassificeringsstrategi och levererar praktiska verktyg för reproducerbara experiment. Sammantaget främjar avhandlingen målet att bygga maskininlärningssystem som inte bara är korrekta, utan också robusta, reproducerbara och pålitliga.

Abstract [en]

This thesis studies adversarial robustness, privacy, and reproducibility in safety-critical machine learning systems, with particular emphasis on computer vision, anomaly detection, and evasion attacks, through a series of papers.

The work begins by analysing the practical costs and benefits of defence strategies against adversarial attacks, revealing that common robustness metrics are poor indicators of real-world adversarial performance (Paper I). Through large-scale experiments, it further demonstrates that adversarial examples can often be generated in linear time, granting attackers a computational advantage over defenders (Paper II). To address this, a novel metric, the Training Rate and Survival Heuristic (TRASH), was developed to predict model failure under attack and facilitate early rejection of vulnerable architectures (Paper III). This metric was then extended to real-world cost, showing that adversarial robustness can be improved using low-cost, low-precision hardware without sacrificing accuracy (Paper IV).

Beyond robustness, the thesis tackles privacy by designing a lightweight, client-side spam detection model that preserves user data and resists several classes of attacks without requiring server-side computation (Paper V). Recognizing the need for reproducible and auditable experiments in safety-critical contexts, the thesis also presents deckard, a declarative software framework for distributed and robust machine learning experimentation (Paper VI).

Together, these contributions offer empirical techniques for evaluating and improving model robustness, propose a privacy-preserving classification strategy, and deliver practical tooling for reproducible experimentation. Ultimately, this thesis advances the goal of building machine learning systems that are not only accurate, but also robust, reproducible, and trustworthy.
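The linear-time generation claim above is easiest to see for signed-gradient evasion attacks in the style of FGSM: one gradient evaluation plus one elementwise step, so the cost scales linearly with the input dimension. The sketch below is an assumption-laden illustration on a toy linear model, not the thesis's code; `predict`, `fgsm_evade`, and the chosen `eps` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: linear score w.x + b. For this model the gradient of
# the score with respect to the input is simply w, so a signed-gradient
# step costs O(d) in the input dimension d.
w = rng.normal(size=16)
b = 0.0

def predict(x):
    return int(x @ w + b > 0)

def fgsm_evade(x, eps):
    """One signed-gradient step of size eps toward the opposite class."""
    grad = w  # d(score)/dx for a linear model
    direction = -np.sign(grad) if predict(x) == 1 else np.sign(grad)
    return x + eps * direction

x = rng.normal(size=16)
# For this model, an eps just large enough to cross the decision boundary
# is guaranteed to flip the prediction.
eps = (abs(x @ w + b) + 0.01) / np.abs(w).sum()
x_adv = fgsm_evade(x, eps)
print(predict(x), "->", predict(x_adv))
```

Because a defender typically needs at least one full (re)training or certification pass to respond, this one-step attack cost is the source of the computational asymmetry the thesis describes.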

Place, publisher, year, edition, pages
Umeå, Sweden: Umeå University, 2025. p. 66
Series
Report / UMINF, ISSN 0348-0542 ; 25.10
Keywords
Machine Learning, Adversarial Machine Learning, Anomaly Detection, Computer Vision, Robustness, Artificial Intelligence, Trustworthy Machine Learning, Adversariell maskininlärning, anomalidetektering, artificiell intelligens, datorseende, maskininlärning, robusthet, tillförlitlig maskininlärning
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-238928 (URN)
978-91-8070-722-0 (ISBN)
978-91-8070-723-7 (ISBN)
Public defence
2025-06-11, UB.A.230 - Lindellhallen 3, Universitetstorget 4, Umeå, Sweden, 13:00 (English)
Funder
Knut and Alice Wallenberg Foundation, 2019.035
Available from: 2025-05-21 Created: 2025-05-16 Last updated: 2025-05-19
Bibliographically approved

Open Access in DiVA

fulltext (3751 kB), 121 downloads
File information
File name: FULLTEXT02.pdf
File size: 3751 kB
Checksum (SHA-512):
592aab3c3743e1adc210dd71dc2e6b02ed33c092918f8ee2b4e565a21b6baac3b50aece1c3e1a2bb9dae6fd2f0b10097cbf595e1a159ce6f809e234c46d90ae5
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Meyers, Charles; Löfstedt, Tommy; Elmroth, Erik

Search in DiVA

By author/editor
Meyers, Charles; Löfstedt, Tommy; Elmroth, Erik
By organisation
Department of Computing Science
In the same journal
Artificial Intelligence Review
Computer Sciences

Search outside of DiVA

Google
Google Scholar
Total: 194 downloads
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are no longer available.

Total: 615 hits