Enhancing vulnerable class robustness in adversarial machine learning
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-8078-5172
2024 (English). In: IJCNN 2024: conference proceedings / [ed] Sansanee Auephanwiriyakul; Yi Mei; Toshihisa Tanaka, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, published paper (refereed).
Abstract [en]

Recent advancements in machine learning, particularly in the field of adversarial training, have underscored the necessity for models that are not only accurate but also robust against adversarial attacks. However, a critical challenge lies in the unequal vulnerability of different classes within these models, often leading to a 'weakest-link' phenomenon where certain classes are more susceptible to adversarial perturbations. This paper presents a novel approach to improving the robustness of machine learning models against adversarial attacks, focusing on classes identified as vulnerable. The core of our methodology involves generating adversarial examples specifically tailored to these vulnerable classes, a process we term class-specific perturbations. We integrate this adversarial training into the learning process, aiming to enhance the model's robustness specifically for these at-risk classes. Motivated by this finding, we propose Balanced Adversarial Training (BAT) to facilitate adversarial training for the vulnerable class. Experimental evaluations across different datasets show that our proposed framework not only improves the robustness of the vulnerable class against adversarial attacks but also maintains overall model performance. Our work highlights the importance of moving beyond average accuracy, which is particularly important in safety-critical applications.
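The full text is not available in DiVA, so the exact BAT formulation cannot be reproduced here. The sketch below is only an illustration of the general idea the abstract describes: craft adversarial examples against the current model, then reweight the training loss so that classes with lower robust accuracy (the vulnerable classes) contribute more. All function names and the choice of a one-step FGSM attack on a linear softmax classifier are assumptions made for the sake of a small, runnable example, not the paper's method.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fgsm_perturb(W, b, X, y, eps):
    """One-step adversarial perturbation (FGSM) against a linear softmax
    classifier: x' = x + eps * sign(dL/dx), with L the cross-entropy loss."""
    P = softmax(X @ W + b)                  # predicted probabilities, (n, k)
    Y = np.eye(W.shape[1])[y]               # one-hot labels, (n, k)
    grad_x = (P - Y) @ W.T                  # dL/dx for softmax cross-entropy
    return X + eps * np.sign(grad_x)

def class_weights(per_class_robust_acc, floor=1e-3):
    """Heuristic per-class loss weights: classes with lower robust accuracy
    (the vulnerable classes) receive proportionally larger weight."""
    w = 1.0 / np.maximum(per_class_robust_acc, floor)
    return w / w.mean()                     # normalize so the mean weight is 1

def balanced_adv_step(W, b, X, y, eps, lr, weights):
    """One gradient step on adversarial examples with a class-reweighted
    cross-entropy loss (a BAT-style update, as assumed in this sketch)."""
    Xa = fgsm_perturb(W, b, X, y, eps)      # attack the current model
    P = softmax(Xa @ W + b)
    Y = np.eye(W.shape[1])[y]
    sw = weights[y][:, None]                # each sample weighted by its class
    gW = Xa.T @ ((P - Y) * sw) / len(y)     # weighted gradient w.r.t. W
    gb = ((P - Y) * sw).mean(axis=0)        # weighted gradient w.r.t. b
    return W - lr * gW, b - lr * gb
```

In a realistic setting the per-class robust accuracies would be re-estimated periodically on held-out data under attack, the attack would be multi-step PGD, and the classifier would be a deep network; the reweighting logic itself carries over unchanged.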

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024.
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
Keywords [en]
Adversarial Training, Adversarial Perturbation, Robust Optimization
National subject category
Other Computer and Information Science; Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-242761
DOI: 10.1109/IJCNN60899.2024.10650931
ISI: 001392668201099
Scopus ID: 2-s2.0-85204947786
ISBN: 979-8-3503-5932-9 (print)
ISBN: 979-8-3503-5931-2 (electronic)
OAI: oai:DiVA.org:umu-242761
DiVA id: diva2:1987833
Conference
International Joint Conference on Neural Networks (IJCNN), June 30 to July 5, 2024, Yokohama, Japan
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP). Available from: 2025-08-08. Created: 2025-08-08. Last updated: 2025-08-08. Bibliographically reviewed.

Open Access in DiVA

No full text available in DiVA

Other links

Publisher's full text
Scopus

Persons

Patil, Minal; Främling, Kary
