Making fairness actionable
Mendez, Julian Alfredo
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-7383-0529
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Alternative title
Att göra rättvisa handlingsbart (Swedish)
Abstract [en]

The opaque nature of machine learning systems has raised concerns about whether these systems can guarantee fairness. In addition, ensuring fair decision-making requires that multiple perspectives on fairness be considered.

Currently, there is no agreement on the definitions of fairness, achieving a shared interpretation of them is difficult, and there is no unified formal language to describe them. Existing definitions are implicit in the operationalization of systems, which makes them difficult to compare.

In this thesis, we discuss how to make fairness actionable and provide concrete tools for doing so. We contribute not only conceptual elements for modeling and abstracting fairness problems, but also a technical framework and a description language.
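To make the aims above concrete, the following minimal sketch shows what an explicit, checkable fairness definition can look like. It is written in Scala (one of the languages targeted by the Soda tooling discussed in the appended papers), and everything in it, from the names to the tolerance parameter epsilon, is an illustrative assumption rather than the thesis's actual framework.

    // Hypothetical sketch: demographic parity as an explicit definition.
    // It holds when the rate of positive outcomes is (approximately)
    // equal across the groups defined by a protected attribute.
    case class Decision(group: String, positive: Boolean)

    def positiveRate(ds: Seq[Decision], group: String): Double = {
      val members = ds.filter(_.group == group)
      if (members.isEmpty) 0.0
      else members.count(_.positive).toDouble / members.size
    }

    // epsilon is an assumed tolerance; a real deployment must justify it.
    def demographicParity(ds: Seq[Decision], epsilon: Double): Boolean = {
      val rates = ds.map(_.group).distinct.map(g => positiveRate(ds, g))
      rates.isEmpty || (rates.max - rates.min) <= epsilon
    }

Once a definition is explicit in this form, two systems (or two definitions) can be compared on the same data instead of through informal descriptions.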

Abstract [sv] (translated from Swedish)

The opaque nature of machine learning systems raises concerns about whether these systems can guarantee fairness. Moreover, fair decision-making requires that multiple perspectives on fairness be taken into account.

Currently, there is no agreement on the definitions, which makes shared interpretation difficult, and a unified formal language to describe them is lacking. Current definitions are built into how the systems are used, which makes them difficult to compare.

In this thesis, we discuss how fairness can be made actionable and provide concrete tools for this. We offer both conceptual elements for modeling and abstracting fairness problems and a technical framework and a description language.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024, p. 24
Series
Report / UMINF, ISSN 0348-0542 ; 24.12
Keywords [en]
Algorithmic fairness, Ethics in artificial intelligence, Formal representation of fairness, Formal verification, Functional languages, Human-centered programming languages, Responsible artificial intelligence
National Category
Computer Sciences; Software Engineering; Ethics
Research subject
Computer Science; Computer Systems; Ethics
Identifiers
URN: urn:nbn:se:umu:diva-232384
ISBN: 9789180705356 (electronic)
ISBN: 9789180705349 (print)
OAI: oai:DiVA.org:umu-232384
DiVA, id: diva2:1916851
Presentation
2024-12-13, MIT.A.121, MIT-huset, Campustorget 5, Umeå, Sweden, 13:00 (English)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-12-03 Created: 2024-11-28 Last updated: 2024-12-03
Bibliographically approved
List of papers
1. Ethical implications of fairness interventions: what might be hidden behind engineering choices?
2022 (English) In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 24, no 1, article id 12. Article in journal (Refereed) Published
Abstract [en]

The importance of fairness in machine learning models is widely acknowledged, and the ongoing academic debate revolves around how to determine the appropriate fairness definition and how to tackle the trade-off between fairness and model performance. In this paper, we argue that, besides these concerns, there can be ethical implications behind seemingly purely technical choices in fairness interventions in a typical model development pipeline. As an example, we show that the technical choice between in-processing and post-processing is not necessarily value-free and may have serious implications in terms of who will be affected by the specific fairness intervention. The paper reveals how assessing technical choices in terms of their ethical consequences can contribute to the design of fair models and to the related societal discussions.
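As a hedged illustration of why this technical choice is not value-free, the sketch below (hypothetical Scala, not the paper's code) shows where the two interventions act: in-processing alters the objective that shapes which model is learned, while post-processing leaves the model untouched and adjusts only its decisions, so different people end up affected by each intervention.

    // Hypothetical sketch; all names are illustrative.
    type Model = Map[String, Double] => Double // features => score

    // In-processing: a fairness penalty enters the training objective,
    // so it influences which model is learned in the first place.
    def trainInProcessing(
        data: Seq[Map[String, Double]],
        loss: Model => Double,
        fairnessPenalty: Model => Double): Model =
      // ... search for a model minimizing loss(m) + fairnessPenalty(m) ...
      _ => 0.0 // placeholder for the learned model

    // Post-processing: the trained model is kept as-is; only its outputs
    // are adjusted, e.g. with group-specific decision thresholds.
    def postProcess(model: Model, thresholds: Map[String, Double])(
        features: Map[String, Double], group: String): Boolean =
      model(features) >= thresholds.getOrElse(group, 0.5)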

Place, publisher, year, edition, pages
Springer, 2022
Keywords
AI Ethics, Bias mitigation, Fairness, Responsible AI
National Category
Computer Sciences; Ethics
Identifiers
urn:nbn:se:umu:diva-193063 (URN)
10.1007/s10676-022-09636-z (DOI)
000762316100003 ()
2-s2.0-85125646943 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
EU, Horizon 2020, 952026
Available from: 2022-03-21 Created: 2022-03-21 Last updated: 2024-11-28
Bibliographically approved
2. ACROCPoLis: a descriptive framework for making sense of fairness
2023 (English) In: FAccT '23: Proceedings of the 2023 ACM conference on fairness, accountability, and transparency, ACM Digital Library, 2023, p. 1014-1025. Conference paper, Published paper (Refereed)
Abstract [en]

Fairness is central to the ethical and responsible development and use of AI systems, and a large number of frameworks and formal notions of algorithmic fairness are available. However, many of the proposed fairness solutions revolve around technical considerations rather than the needs of, and consequences for, the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects, to represent how AI systems affect, and are experienced by, individuals and social groups. In this paper, we do this by proposing the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects. The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit, as well as their interrelationships. This enables us to compare analogous situations, to highlight the differences in dissimilar situations, and to capture differing interpretations of the same situation by different stakeholders.
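ACROCPoLis is a descriptive framework rather than software, but the idea of a shared vocabulary with explicit, comparable factors can be pictured as a structured record. The Scala sketch below is purely illustrative; its field names are assumptions and do not reproduce the framework's actual factors.

    // Illustrative only: making the factors of an allocation process
    // explicit turns fairness assessments into comparable records.
    case class AllocationProcess(
        actors: Set[String],           // who allocates and who is affected
        resources: Set[String],        // what is being allocated
        criteria: Set[String],         // the grounds for the decisions
        outcomes: Map[String, String]  // what each affected party receives
    )

    // Analogous situations can now be compared factor by factor.
    def differingFactors(a: AllocationProcess, b: AllocationProcess): Seq[String] =
      Seq(
        Option.when(a.actors != b.actors)("actors"),
        Option.when(a.resources != b.resources)("resources"),
        Option.when(a.criteria != b.criteria)("criteria"),
        Option.when(a.outcomes != b.outcomes)("outcomes")
      ).flatten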

Place, publisher, year, edition, pages
ACM Digital Library, 2023
Keywords
Algorithmic fairness, Socio-technical processes, Social impact of AI, Responsible AI
National Category
Information Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-209705 (URN)
10.1145/3593013.3594059 (DOI)
001062819300088 ()
2-s2.0-85163594710 (Scopus ID)
978-1-4503-7252-7 (ISBN)
Conference
2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, Illinois, USA, June 12-15, 2023
Available from: 2023-06-13 Created: 2023-06-13 Last updated: 2025-04-24
Bibliographically approved
3. A clearer view on fairness: visual and formal representations for comparative analysis
2024 (English) In: 14th Scandinavian Conference on Artificial Intelligence, SCAI 2024: June 10-11, 2024, Jönköping, Sweden / [ed] Florian Westphal; Einav Peretz-Andersson; Maria Riveiro; Kerstin Bach; Fredrik Heintz, Jönköping University, 2024, p. 112-120. Conference paper, Published paper (Refereed)
Abstract [en]

The opaque nature of machine learning systems has raised concerns about whether these systems can guarantee fairness. Furthermore, ensuring fair decision making requires the consideration of multiple perspectives on fairness. 

At the moment, there is no agreement on the definitions of fairness, achieving shared interpretations is difficult, and there is no unified formal language to describe them. Current definitions are implicit in the operationalization of systems, making their comparison difficult.

In this paper, we propose a framework for specifying formal representations of fairness that allows instantiating, visualizing, and comparing different interpretations of fairness. Our framework provides a meta-model for comparative analysis. We present several examples that consider different definitions of fairness, as well as an open-source implementation that uses the object-oriented functional language Soda.
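The paper's open-source implementation is written in Soda; as a rough sketch of the meta-model idea, the hypothetical Scala below gives different fairness interpretations one common interface so that they can be instantiated and compared side by side. The trait and both instances are assumptions for illustration, not the paper's actual definitions.

    // Hypothetical sketch of a meta-model: each fairness interpretation
    // instantiates one interface, which makes definitions comparable.
    case class Outcome(group: String, predicted: Boolean, actual: Boolean)

    trait FairnessDefinition {
      def name: String
      def satisfiedBy(outcomes: Seq[Outcome], epsilon: Double): Boolean
    }

    object StatisticalParity extends FairnessDefinition {
      val name = "statistical parity"
      def satisfiedBy(outcomes: Seq[Outcome], epsilon: Double): Boolean = {
        // compare rates of positive predictions across groups
        val rates = outcomes.groupBy(_.group).values
          .map(g => g.count(_.predicted).toDouble / g.size).toSeq
        rates.isEmpty || rates.max - rates.min <= epsilon
      }
    }

    object EqualOpportunity extends FairnessDefinition {
      val name = "equal opportunity"
      def satisfiedBy(outcomes: Seq[Outcome], epsilon: Double): Boolean = {
        // compare true-positive rates across groups
        val tprs = outcomes.filter(_.actual).groupBy(_.group).values
          .map(g => g.count(_.predicted).toDouble / g.size).toSeq
        tprs.isEmpty || tprs.max - tprs.min <= epsilon
      }
    }

Two interpretations of the same situation then differ only in which FairnessDefinition is instantiated, which is the kind of difference a comparative analysis can make visible.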

Place, publisher, year, edition, pages
Jönköping University, 2024
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 208
Keywords
Responsible artificial intelligence, Ethics in artificial intelligence, Formal representation of fairness
National Category
Software Engineering; Computer Sciences
Research subject
Computer Science; Ethics
Identifiers
urn:nbn:se:umu:diva-232255 (URN)
10.3384/ecp208013 (DOI)
9789180757096 (ISBN)
Conference
14th Scandinavian Conference on Artificial Intelligence, Jönköping, Sweden, June 10-11, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-11-27 Created: 2024-11-27 Last updated: 2024-12-02
Bibliographically approved
4. Soda: an object-oriented functional language for specifying human-centered problems
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We present Soda (Symbolic Objective Descriptive Analysis), a language that helps to treat qualities and quantities in a natural way and greatly simplifies the task of checking their correctness.

We present key properties of the language, motivated by the design of a descriptive language to encode complex requirements on computer systems, and we explain how these key properties must be addressed to model those requirements with simple definitions.

We give an overview of a tool that helps to describe problems in a way that we consider more transparent and less error-prone.
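Soda's concrete syntax is not shown in this record, so the sketch below uses Scala, one of Soda's compilation targets, to convey the intended style: requirements encoded as small, total, side-effect-free definitions that are easy to read and to check. The scenario and all names are assumptions, not examples from the manuscript.

    // Hypothetical sketch of the style the abstract describes: both
    // qualities (Boolean conditions) and quantities (numbers) appear
    // as plain definitions with no hidden state.
    case class Applicant(age: Int, score: Double)

    val legalAge: Int = 18

    def isOfLegalAge(applicant: Applicant): Boolean =
      applicant.age >= legalAge

    // Requirement: an applicant qualifies when of legal age and at or
    // above a cutoff. The definition can be reviewed and tested as-is.
    def qualifies(applicant: Applicant, cutoff: Double): Boolean =
      isOfLegalAge(applicant) && applicant.score >= cutoff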

Keywords
Responsible artificial intelligence, Functional languages, Object-oriented languages, Human-centered programming languages
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-232076 (URN)
10.48550/arXiv.2310.01961 (DOI)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-11-27 Created: 2024-11-27 Last updated: 2024-11-28
Bibliographically approved
5. Can proof assistants verify multi-agent systems?
2024 (English) Conference paper, Oral presentation only (Refereed)
Abstract [en]

This paper presents the Soda language for verifying multi-agent systems. Soda is a high-level functional and object-oriented language that supports the compilation of its code not only to Scala, a strongly statically typed high-level programming language, but also to Lean, a proof assistant and programming language. Given these capabilities, Soda can implement multi-agent systems, or parts thereof, that can then be integrated into a mainstream software ecosystem on the one hand and formally verified with state-of-the-art tools on the other hand.

We provide a brief and informal introduction to Soda and the aforementioned interoperability capabilities, as well as a simple demonstration of how interaction protocols can be designed and verified with Soda. In the course of the demonstration, we highlight challenges with respect to real-world applicability.
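To give a hedged sense of what "verifying with a proof assistant" means here, the Lean 4 fragment below defines a toy two-message exchange and proves a property of it for all cases; it is a minimal sketch under assumed names, not the demonstration from the paper.

    -- Minimal sketch: a toy protocol step whose behavior is proved,
    -- not just tested on examples.
    inductive Message where
      | request
      | reply

    -- The protocol answers every request with a reply and vice versa.
    def respond : Message → Message
      | Message.request => Message.reply
      | Message.reply   => Message.request

    -- The proof assistant checks this for every possible message.
    theorem respond_involutive (m : Message) : respond (respond m) = m := by
      cases m <;> rfl

Because Soda code can be compiled to Lean, properties of this kind can in principle be stated about the same definitions that are also compiled to Scala for integration into a mainstream software ecosystem.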

Keywords
Engineering Multi-Agent Systems, Formal Verification, Proof Automation
National Category
Computer Sciences; Computer Engineering; Computer Systems
Research subject
Computer Science; Computer Systems; Mathematical Logic
Identifiers
urn:nbn:se:umu:diva-232383 (URN)
Conference
21st European Conference on Multi-Agent Systems, EUMAS 2024, Dublin, Ireland, August 26-28, 2024
Available from: 2024-11-28 Created: 2024-11-28 Last updated: 2025-02-04

Open Access in DiVA

fulltext (395 kB), 76 downloads
File information
File name: FULLTEXT01.pdf
File size: 395 kB
Checksum (SHA-512): 9afe9cdc5b28921d6987d9e8a2f70a475fe09f4940b1257cc273a2f598c2f03c3363dcf8e568b566228258662d07b85eb91c95f973d0b124dcd371b9b92ec139
Type: fulltext
Mimetype: application/pdf

Other links

Full text with appended papers

Authority records

Mendez, Julian Alfredo

