Patil, Minal
Publications (4 of 4)
Patil, M. S., Ung, G. & Nyberg, M. (2025). Towards specification-driven LLM-based generation of embedded automotive software. In: Bernhard Steffen (Ed.), Bridging the gap between AI and reality: second international conference, AISoLA 2024, Crete, Greece, October 30 – November 3, 2024, proceedings. Paper presented at 2nd International Conference on Bridging the Gap Between AI and Reality, AISoLA 2024, Crete, Greece, October 30 - November 3, 2024 (pp. 125-144). Springer
Towards specification-driven LLM-based generation of embedded automotive software
2025 (English). In: Bridging the gap between AI and reality: second international conference, AISoLA 2024, Crete, Greece, October 30 – November 3, 2024, proceedings / [ed] Bernhard Steffen, Springer, 2025, p. 125-144. Conference paper, Published paper (Refereed)
Abstract [en]

The paper studies how code generation by LLMs can be combined with formal verification to produce critical embedded software. The first contribution is a general framework, spec2code, in which LLMs are combined with different types of critics that produce feedback for iterative backprompting and fine-tuning. The second contribution presents a first feasibility study, where a minimalistic instantiation of spec2code, without iterative backprompting and fine-tuning, is empirically evaluated using three industrial case studies from the heavy vehicle manufacturer Scania. The goal is to automatically generate industrial-quality code from specifications only. Different combinations of formal ACSL specifications and natural language specifications are explored. The results indicate that formally correct code can be generated even without the application of iterative backprompting and fine-tuning.
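The generate-then-verify loop described in the abstract can be sketched as follows. This is a minimal illustration of the general spec2code idea (an LLM generator proposes code, critics such as a formal verifier return feedback, and feedback is folded back into the prompt); all names (`spec2code`, `toy_generate`, `toy_critic`) and the toy clamp specification are illustrative assumptions, not the paper's actual interface.

```python
# Sketch of a spec2code-style loop: generate a candidate from the spec,
# run every critic, and backprompt with the critics' feedback until all
# critics accept or the round budget runs out.
from typing import Callable, Optional

def spec2code(spec: str,
              generate: Callable[[str], str],
              critics: list,
              max_rounds: int = 3) -> Optional[str]:
    prompt = spec
    for _ in range(max_rounds):
        candidate = generate(prompt)
        feedback = [msg for critic in critics
                    if (msg := critic(spec, candidate)) is not None]
        if not feedback:                 # every critic accepts the candidate
            return candidate
        # backprompt: append the critics' feedback to the specification
        prompt = spec + "\n# Critic feedback:\n" + "\n".join(feedback)
    return None                          # no verified candidate within budget

# Toy instantiation: the "LLM" only produces correct code once it has
# seen critic feedback, and the "critic" is a simple behavioral check
# standing in for a formal verifier.
def toy_generate(prompt: str) -> str:
    if "Critic feedback" in prompt:
        return "def clamp(x):\n    return max(0, min(x, 100))"
    return "def clamp(x):\n    return x"

def toy_critic(spec: str, code: str) -> Optional[str]:
    env: dict = {}
    exec(code, env)                      # run the candidate to test it
    ok = env["clamp"](250) == 100 and env["clamp"](-5) == 0
    return None if ok else "clamp must saturate at [0, 100]"

result = spec2code("clamp x to [0, 100]", toy_generate, [toy_critic])
```

In the paper's minimal feasibility study the backprompting branch is disabled; in this sketch that corresponds to `max_rounds=1`.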

Place, publisher, year, edition, pages
Springer, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15217
Keywords
Automated Software Engineering, Code Generation, Formal Verification, Large Language Models
National Category
Software Engineering; Artificial Intelligence
Identifiers
urn:nbn:se:umu:diva-234883 (URN); 10.1007/978-3-031-75434-0_9 (DOI); 2-s2.0-85215782156 (Scopus ID); 9783031754333 (ISBN); 9783031754340 (ISBN)
Conference
2nd International Conference on Bridging the Gap Between AI and Reality, AISoLA 2024, Crete, Greece, October 30 - November 3, 2024
Available from: 2025-02-07. Created: 2025-02-07. Last updated: 2025-02-07. Bibliographically approved.
Patil, M. & Främling, K. (2024). Enhancing vulnerable class robustness in adversarial machine learning. In: Sansanee Auephanwiriyakul; Yi Mei; Toshihisa Tanaka (Ed.), IJCNN 2024: conference proceedings. Paper presented at the International Joint Conference on Neural Networks (IJCNN), June 30 – July 5, 2024, Yokohama, Japan. Institute of Electrical and Electronics Engineers (IEEE)
Enhancing vulnerable class robustness in adversarial machine learning
2024 (English). In: IJCNN 2024: conference proceedings / [ed] Sansanee Auephanwiriyakul; Yi Mei; Toshihisa Tanaka, Institute of Electrical and Electronics Engineers (IEEE), 2024. Conference paper, Published paper (Refereed)
Abstract [en]

Recent advancements in machine learning, particularly in the field of adversarial training, have underscored the necessity for models that are not only accurate but also robust against adversarial attacks. However, a critical challenge lies in the unequal vulnerability of different classes within these models, often leading to a 'weakest-link' phenomenon where certain classes are more susceptible to adversarial perturbations. This paper presents a novel approach to improving the robustness of machine learning models against adversarial attacks, focusing on classes identified as vulnerable. The core of our methodology involves generating adversarial examples specifically tailored to these vulnerable classes, a process we term class-specific perturbations. We integrate this adversarial training into the learning process, aiming to enhance the model's robustness specifically for these at-risk classes. Motivated by this finding, we propose Balanced Adversarial Training (BAT) to facilitate adversarial training for the vulnerable class. Experimental evaluations across different datasets show that our proposed framework not only improves the robustness of the vulnerable class against adversarial attacks but also maintains overall model performance. Our work highlights the importance of moving beyond average accuracy, which is particularly important in safety-critical applications.
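The core idea of the abstract can be sketched numerically: measure per-class robust accuracy, flag the "weakest-link" class, and give it a larger share of the adversarial-training budget. The weighting rule below (scaling the perturbation radius for the vulnerable class) is one plausible instantiation for illustration, not the paper's exact BAT formulation.

```python
# Identify the vulnerable class and assign class-specific perturbation
# budgets: the weakest class is attacked with a larger epsilon so that
# adversarial training concentrates on it.
def vulnerable_class(per_class_robust_acc: dict) -> int:
    """The 'weakest link': the class with the lowest robust accuracy."""
    return min(per_class_robust_acc, key=per_class_robust_acc.get)

def perturbation_budgets(per_class_robust_acc: dict,
                         base_eps: float = 0.03,
                         boost: float = 2.0) -> dict:
    """Per-class epsilon; the vulnerable class gets a boosted budget."""
    weak = vulnerable_class(per_class_robust_acc)
    return {c: base_eps * (boost if c == weak else 1.0)
            for c in per_class_robust_acc}

# Example: class 2 is clearly the most susceptible to perturbations.
robust_acc = {0: 0.71, 1: 0.68, 2: 0.41, 3: 0.66}
budgets = perturbation_budgets(robust_acc)
```

During training, each class's adversarial examples would then be generated under its own budget, which is what the abstract calls class-specific perturbations.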

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Series
IEEE International Joint Conference on Neural Networks (IJCNN), ISSN 2161-4393
Keywords
Adversarial Training, Adversarial Perturbation, Robust Optimization
National Category
Other Computer and Information Science; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-242761 (URN); 10.1109/IJCNN60899.2024.10650931 (DOI); 001392668201099 (); 2-s2.0-85204947786 (Scopus ID); 979-8-3503-5932-9 (ISBN); 979-8-3503-5931-2 (ISBN)
Conference
International Joint Conference on Neural Networks (IJCNN), June 30 – July 5, 2024, Yokohama, Japan
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-08-08. Created: 2025-08-08. Last updated: 2025-08-08. Bibliographically approved.
Patil, M. & Främling, K. (2023). Do intermediate feature coalitions aid explainability of black-box models?. In: Luca Longo (Ed.), Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I. Paper presented at xAI 2023: Explainable Artificial Intelligence, Lisbon, Portugal, July 26-28, 2023 (pp. 115-130). Cham: Springer
Do intermediate feature coalitions aid explainability of black-box models?
2023 (English). In: Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I / [ed] Luca Longo, Cham: Springer, 2023, p. 115-130. Conference paper, Published paper (Refereed)
Abstract [en]

This work introduces the notion of intermediate concepts based on a levels structure to aid explainability for black-box models. The levels structure is a hierarchical structure in which each level corresponds to a partition of the features of a dataset (i.e., a player-set partition). The level of coarseness increases from the trivial partition, which comprises only singletons, to the coarsest partition, which contains only the grand coalition. In addition, it is possible to establish meronomies, i.e., part-whole relationships, via a domain expert, which can be utilised to generate explanations at an abstract level. We illustrate the usability of this approach in a real-world car model example and on the Titanic dataset, where intermediate concepts aid explainability at different levels of abstraction.
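The levels structure described above can be sketched as a chain of partitions of the feature set, each level coarsening the one below it, from singletons up to the grand coalition. The Titanic-style feature names and the particular grouping into intermediate concepts below are illustrative assumptions, not taken from the paper.

```python
# A levels structure: a chain of partitions of the feature set in which
# every level is a coarsening of the previous one.
from itertools import chain

def is_coarsening(finer, coarser):
    """Every block of the finer partition lies inside some coarser block."""
    return all(any(block <= big for big in coarser) for block in finer)

def is_levels_structure(levels, features):
    """Check each level partitions `features` and the chain gets coarser."""
    for level in levels:
        if set(chain.from_iterable(level)) != features:
            return False              # blocks do not cover the feature set
        if sum(len(b) for b in level) != len(features):
            return False              # blocks overlap, so not a partition
    return all(is_coarsening(levels[i], levels[i + 1])
               for i in range(len(levels) - 1))

features = {"age", "sex", "fare", "pclass"}
levels = [
    [{"age"}, {"sex"}, {"fare"}, {"pclass"}],     # trivial: singletons only
    [{"age", "sex"}, {"fare", "pclass"}],         # intermediate concepts
    [{"age", "sex", "fare", "pclass"}],           # grand coalition only
]
```

A domain expert would supply the middle levels (the meronomies), e.g. grouping `fare` and `pclass` into a "socio-economic status" concept; explanations can then be computed per block at any chosen level of abstraction.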

Place, publisher, year, edition, pages
Cham: Springer, 2023
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1901
Keywords
Coalition Formation, Explainability, Trust in Human-Agent Systems
National Category
Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-216079 (URN); 10.1007/978-3-031-44064-9_7 (DOI); 001286482300007 (); 2-s2.0-85176954534 (Scopus ID); 9783031440632 (ISBN); 9783031440649 (ISBN)
Conference
xAI 2023: Explainable Artificial Intelligence, Lisbon, Portugal, July 26-28, 2023
Funder
Knut and Alice Wallenberg Foundation, 570011440
Available from: 2023-11-01. Created: 2023-11-01. Last updated: 2025-04-24. Bibliographically approved.
Patil, M. S. & Främling, K. (2023). Investigating Lipschitz constants in neural ensemble models to improve adversarial robustness. In: ICSRS 2023: 7th international conference on system reliability and safety. Paper presented at 7th International Conference on System Reliability and Safety, ICSRS 2023, Bologna, 22-24 November 2023 (pp. 434-438). Institute of Electrical and Electronics Engineers (IEEE)
Investigating Lipschitz constants in neural ensemble models to improve adversarial robustness
2023 (English). In: ICSRS 2023: 7th international conference on system reliability and safety, Institute of Electrical and Electronics Engineers (IEEE), 2023, p. 434-438. Conference paper, Published paper (Refereed)
Abstract [en]

This work investigates the relationship between adversarial robustness and the local Lipschitz constant in ensemble neural network frameworks, namely bagging and stacking. Capitalising on this, we introduce an ensemble neural network design that improves both accuracy and adversarial resilience. We theoretically obtain the local Lipschitz constants for both ensembles, offering insights into their susceptibility to adversarial attacks and identifying architectures optimal for adversarial defense. Notably, our approach negates the need for a specific adversarial attack and accommodates any number of pre-trained networks in an ensemble architecture. Evaluations on the MNIST and CIFAR-10 datasets against white-box attacks, specifically FGSM and PGD, show that our approach is more adversarially robust than standalone networks and vanilla ensemble architectures.
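The key quantity in the abstract is the Lipschitz constant of the ensemble. For a bagging-style ensemble that averages its members, f(x) = (1/n) Σ f_i(x), the triangle inequality gives the generic bound L_f ≤ (1/n) Σ L_i: a smaller constant means bounded sensitivity to perturbations. The toy 1-D linear members below (whose Lipschitz constant is just the absolute slope) illustrate that bound; this is a generic sketch, not the paper's derivation for bagging or stacking networks.

```python
# Lipschitz bound for an averaging ensemble, checked empirically on
# toy 1-D linear members where the constant is simply |slope|.
def lipschitz_bound_average(member_constants):
    """Upper bound on the Lipschitz constant of an averaging ensemble."""
    return sum(member_constants) / len(member_constants)

def empirical_lipschitz(f, xs):
    """Largest observed slope |f(a) - f(b)| / |a - b| over sample pairs."""
    return max(abs(f(a) - f(b)) / abs(a - b)
               for a in xs for b in xs if a != b)

members = [lambda x: 3.0 * x + 1.0,   # Lipschitz constant 3.0
           lambda x: -1.0 * x,        # Lipschitz constant 1.0
           lambda x: 0.5 * x]         # Lipschitz constant 0.5
ensemble = lambda x: sum(m(x) for m in members) / len(members)

bound = lipschitz_bound_average([3.0, 1.0, 0.5])      # (3 + 1 + 0.5) / 3
observed = empirical_lipschitz(ensemble, [-2.0, -0.5, 0.0, 1.0, 2.5])
```

Here the observed ensemble slope is (3 − 1 + 0.5) / 3 ≈ 0.83, comfortably below the bound of 1.5, since opposing member slopes partially cancel under averaging.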

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2023
Keywords
Adversarial Robustness, Certification, Ensemble Methods, Lipschitz constant, Neural Network
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-220754 (URN); 10.1109/ICSRS59833.2023.10381066 (DOI); 2-s2.0-85183474111 (Scopus ID); 9798350306057 (ISBN); 9798350306040 (ISBN)
Conference
7th International Conference on System Reliability and Safety, ICSRS 2023, Bologna, 22-24 November 2023
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Knut and Alice Wallenberg Foundation; Interreg
Available from: 2024-02-12. Created: 2024-02-12. Last updated: 2024-02-13. Bibliographically approved.