Dignum, Virginia, Professor. ORCID iD: orcid.org/0000-0001-7409-5813
Publications (10 of 66)
Pedreschi, D., Pappalardo, L., Ferragina, E., Baeza-Yates, R., Barabási, A.-L., Dignum, F., . . . Vespignani, A. (2025). Human-AI coevolution. Artificial Intelligence, 339, Article ID 104244.
Human-AI coevolution
2025 (English). In: Artificial Intelligence, ISSN 0004-3702, E-ISSN 1872-7921, Vol. 339, article id 104244. Article, review/survey (Refereed). Published.
Abstract [en]

Human-AI coevolution, defined as a process in which humans and AI algorithms continuously influence each other, increasingly characterises our society, but is understudied in artificial intelligence and complexity science literature. Recommender systems and assistants play a prominent role in human-AI coevolution, as they permeate many facets of daily life and influence human choices through online platforms. The interaction between users and AI results in a potentially endless feedback loop, wherein users' choices generate data to train AI models, which, in turn, shape subsequent user preferences. This human-AI feedback loop has peculiar characteristics compared to traditional human-machine interaction and gives rise to complex and often “unintended” systemic outcomes. This paper introduces human-AI coevolution as the cornerstone for a new field of study at the intersection between AI and complexity science focused on the theoretical, empirical, and mathematical investigation of the human-AI feedback loop. In doing so, we: (i) outline the pros and cons of existing methodologies and highlight shortcomings and potential ways for capturing feedback loop mechanisms; (ii) propose a reflection at the intersection between complexity science, AI and society; (iii) provide real-world examples for different human-AI ecosystems; and (iv) illustrate challenges to the creation of such a field of study, conceptualising them at increasing levels of abstraction, i.e., scientific, legal and socio-political.
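
To make the feedback loop concrete, here is a minimal simulation sketch; it is illustrative only and not taken from the paper, and the agents, parameters, and popularity-based recommender are invented. It shows how choices made under a recommender's influence are fed back into the model that is retrained on them, gradually concentrating exposure on a few items.

```python
# Minimal, hypothetical sketch of a human-AI feedback loop (not from the paper):
# a popularity-based recommender is repeatedly retrained on the choices it itself
# helped to shape, which tends to concentrate exposure on a few items.
import random

N_USERS, N_ITEMS, ROUNDS = 200, 20, 30
random.seed(0)

# Each user has fixed intrinsic preferences over items.
preferences = [[random.random() for _ in range(N_ITEMS)] for _ in range(N_USERS)]
popularity = [1.0] * N_ITEMS          # the "model": item scores learned from clicks

def recommend():
    """Recommend the currently most popular item."""
    return max(range(N_ITEMS), key=lambda i: popularity[i])

for _ in range(ROUNDS):
    clicks = [0] * N_ITEMS
    for u in range(N_USERS):
        if random.random() < 0.7:      # user follows the recommender
            choice = recommend()
        else:                          # user follows their own preference
            choice = max(range(N_ITEMS), key=lambda i: preferences[u][i])
        clicks[choice] += 1
    # "Retrain" the model on data generated under its own influence.
    popularity = [0.9 * p + c for p, c in zip(popularity, clicks)]

top_share = max(popularity) / sum(popularity)
print(f"share of exposure captured by the top item: {top_share:.2f}")
```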

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Artificial intelligence, Complex systems, Computational social science, Human-AI coevolution
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-232122 (URN); 10.1016/j.artint.2024.104244 (DOI); 001359648100001 (); 2-s2.0-85209118417 (Scopus ID)
Funder
European Commission; EU, Horizon 2020, 952026; EU, Horizon 2020, 871042; EU, European Research Council, ERC-2018-ADG 834756
Available from: 2024-11-27 Created: 2024-11-27 Last updated: 2025-04-24. Bibliographically approved
Dignum, V., Dignum, F., Fjaestad, M. & Tucker, J. (2025). Submission to the United Nations to identify the terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on AI and Global Dialogue on AI Governance. Umeå University
Submission to the United Nations to identify the terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on AI and Global Dialogue on AI Governance
2025 (English). Report (Other (popular science, discussion, etc.))
Abstract [en]

The United Nations (UN) put out a questionnaire and request for public comment on a proposal to form an Independent International Scientific Panel on Artificial Intelligence (AI), as well as a Global Dialogue on AI in the context of the Global Digital Compact. The questionnaire was sent out in late 2024, and the following brief report forms the response submitted by the AI Policy Lab at Umeå University, Sweden.

Place, publisher, year, edition, pages
Umeå University, 2025. p. 12
Keywords
Independent International Scientific Panel on Artificial Intelligence (AI); Global Dialogue on AI; United Nations; Responsible AI
National Category
Computer Sciences; Political Science; Law
Identifiers
urn:nbn:se:umu:diva-239528 (URN)
Available from: 2025-06-03 Created: 2025-06-03 Last updated: 2025-06-03. Bibliographically approved
Gaffney, O., Luers, A., Carrero-Martinez, F., Oztekin-Gunaydin, B., Creutzig, F., Dignum, V., . . . Takahashi Guevara, K. (2025). The Earth alignment principle for artificial intelligence. Nature Sustainability, Article ID 233.
The Earth alignment principle for artificial intelligence
2025 (English). In: Nature Sustainability, E-ISSN 2398-9629, article id 233. Article in journal, Editorial material (Refereed). Epub ahead of print.
Abstract [en]

At a time when the world must cut greenhouse gas emissions precipitously, artificial intelligence (AI) brings large opportunities and large risks. To address its uncertain environmental impact, we propose the ‘Earth alignment’ principle to guide AI development and deployment towards planetary stability.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Environmental Sciences; Artificial Intelligence
Identifiers
urn:nbn:se:umu:diva-237584 (URN); 10.1038/s41893-025-01536-6 (DOI); 001455801600001 (); 2-s2.0-105001875601 (Scopus ID)
Available from: 2025-04-24 Created: 2025-04-24 Last updated: 2025-04-24
Mendez, J. A., Kampik, T., Aler Tubella, A. & Dignum, V. (2024). A clearer view on fairness: visual and formal representations for comparative analysis. In: Florian Westphal; Einav Peretz-Andersson; Maria Riveiro; Kerstin Bach; Fredrik Heintz (Ed.), 14th Scandinavian Conference on Artificial Intelligence, SCAI 2024: June 10-11, 2024, Jönköping, Sweden. Paper presented at 14th Scandinavian Conference on Artificial Intelligence, Jönköping, Sweden, June 10-11, 2024 (pp. 112-120). Jönköping University
A clearer view on fairness: visual and formal representations for comparative analysis
2024 (English). In: 14th Scandinavian Conference on Artificial Intelligence, SCAI 2024: June 10-11, 2024, Jönköping, Sweden / [ed] Florian Westphal; Einav Peretz-Andersson; Maria Riveiro; Kerstin Bach; Fredrik Heintz, Jönköping University, 2024, p. 112-120. Conference paper, Published paper (Refereed)
Abstract [en]

The opaque nature of machine learning systems has raised concerns about whether these systems can guarantee fairness. Furthermore, ensuring fair decision making requires the consideration of multiple perspectives on fairness. 

At the moment, there is no agreement on the definitions of fairness, achieving shared interpretations is difficult, and there is no unified formal language to describe them. Current definitions are implicit in the operationalization of systems, making their comparison difficult.

In this paper, we propose a framework for specifying formal representations of fairness that allows instantiating, visualizing, and comparing different interpretations of fairness. Our framework provides a meta-model for comparative analysis. We present several examples that consider different definitions of fairness, as well as an open-source implementation that uses the object-oriented functional language Soda.
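
The paper's open-source implementation is written in Soda. Purely as an illustration of the kind of side-by-side comparison the framework targets, the Python sketch below instantiates two common fairness definitions (demographic parity and equal opportunity) on invented toy data; the data structures, names, and metrics are assumptions for illustration, not the paper's meta-model.

```python
# Hypothetical Python sketch (the paper's own implementation uses the Soda language):
# two standard fairness definitions instantiated on toy decisions so they can be
# compared side by side, in the spirit of a comparative meta-model.
from dataclasses import dataclass

@dataclass
class Outcome:
    group: str        # protected attribute, e.g. "A" or "B"
    label: int        # ground-truth label (1 = positive)
    decision: int     # model decision (1 = positive)

def positive_rate(data, condition):
    """Share of positive decisions among outcomes satisfying the condition."""
    selected = [d for d in data if condition(d)]
    return sum(d.decision for d in selected) / len(selected) if selected else 0.0

def demographic_parity_gap(data):
    """|P(decision=1 | group A) - P(decision=1 | group B)|"""
    return abs(positive_rate(data, lambda d: d.group == "A")
               - positive_rate(data, lambda d: d.group == "B"))

def equal_opportunity_gap(data):
    """|P(decision=1 | label=1, group A) - P(decision=1 | label=1, group B)|"""
    return abs(positive_rate(data, lambda d: d.group == "A" and d.label == 1)
               - positive_rate(data, lambda d: d.group == "B" and d.label == 1))

toy = [Outcome("A", 1, 1), Outcome("A", 0, 1), Outcome("A", 1, 0),
       Outcome("B", 1, 0), Outcome("B", 0, 0), Outcome("B", 1, 1)]

print("demographic parity gap:", round(demographic_parity_gap(toy), 2))
print("equal opportunity gap:", round(equal_opportunity_gap(toy), 2))
```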

Place, publisher, year, edition, pages
Jönköping University, 2024
Series
Linköping Electronic Conference Proceedings, ISSN 1650-3686, E-ISSN 1650-3740 ; 208
Keywords
Responsible artificial intelligence, Ethics in artificial intelligence, Formal representation of fairness
National Category
Software Engineering; Computer Sciences
Research subject
Computer Science; Ethics
Identifiers
urn:nbn:se:umu:diva-232255 (URN); 10.3384/ecp208013 (DOI); 9789180757096 (ISBN)
Conference
14th Scandinavian Conference on Artificial Intelligence, Jönköping, Sweden, June 10-11, 2024
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-11-27 Created: 2024-11-27 Last updated: 2024-12-02. Bibliographically approved
Methnani, L., Dignum, V. & Theodorou, A. (2024). Clash of the explainers: argumentation for context-appropriate explanations. In: Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid; Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova (Ed.), Artificial Intelligence. ECAI 2023: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part I. Paper presented at International Workshops of the 26th European Conference on Artificial Intelligence, ECAI 2023 (pp. 7-23). Springer
Clash of the explainers: argumentation for context-appropriate explanations
2024 (English). In: Artificial Intelligence. ECAI 2023: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part I / [ed] Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid; Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova, Springer, 2024, p. 7-23. Conference paper, Published paper (Refereed)
Abstract [en]

Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task. There is no single approach that is best suited for a given context. This paper aims to address the challenge of selecting the most appropriate explainer given the context in which an explanation is required. For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation. If, in general, no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Due to the transparency they afford, we propose employing argumentation techniques to reach an agreement over the most suitable explainers from a given set of possible explainers.

In this paper, we propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest. By formalizing supporting premises—and inferences—we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the techniques and prioritise the best one for the given context, while also offering transparency into the selection decision.
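
As an illustration of the general idea only (the argument names, premises, and priorities below are invented, and the resolution rule is a simplification rather than the paper's formalisation), the following sketch maps stakeholder characteristics to candidate explainers and discards any explainer attacked by an applicable argument before picking the highest-priority survivor.

```python
# Hypothetical sketch (names and rules invented, not the paper's formalisation):
# premises about a stakeholder make arguments for candidate explainers applicable;
# an explainer attacked by an applicable argument is defeated, and the surviving
# explainer with the highest priority is selected.
stakeholder = {"technical": False, "needs_local_explanation": True, "time_budget": "low"}

# Each argument: (explainer, premise over the stakeholder, priority, attacked explainers)
arguments = [
    ("counterfactual", lambda s: s["needs_local_explanation"], 3, {"global_surrogate"}),
    ("feature_attribution", lambda s: s["technical"], 2, set()),
    ("global_surrogate", lambda s: s["time_budget"] == "high", 1, set()),
]

applicable = {name for name, premise, _, _ in arguments if premise(stakeholder)}
defeated = {target for name, _, _, attacks in arguments
            if name in applicable for target in attacks}
surviving = applicable - defeated

best = max(surviving, key=lambda name: next(p for n, _, p, _ in arguments if n == name))
print("selected explainer:", best)   # -> counterfactual for this stakeholder
```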

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937
Keywords
Argumentation, Explainability, Transparency
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-221005 (URN); 10.1007/978-3-031-50396-2_1 (DOI); 001259329400001 (); 2-s2.0-85184098368 (Scopus ID); 978-3-031-50395-5 (ISBN); 978-3-031-50396-2 (ISBN)
Conference
International Workshops of the 26th European Conference on Artificial Intelligence, ECAI 2023
Available from: 2024-03-06 Created: 2024-03-06 Last updated: 2025-04-24. Bibliographically approved
Brännström, A., Dignum, V. & Nieves, J. C. (2024). Goal-hiding information-seeking dialogues: a formal framework. International Journal of Approximate Reasoning, 177, Article ID 109325.
Goal-hiding information-seeking dialogues: a formal framework
2024 (English). In: International Journal of Approximate Reasoning, ISSN 0888-613X, E-ISSN 1873-4731, Vol. 177, article id 109325. Article in journal (Refereed). Published.
Abstract [en]

We consider a type of information-seeking dialogue between a seeker agent and a respondent agent, where the seeker estimates that the respondent is not willing to share a particular set of sought-after information. Hence, the seeker postpones (hides) its goal topic, related to the respondent's sensitive information, until the respondent is perceived as willing to talk about it. In the intermediate process, the seeker opens other topics to steer the dialogue tactfully towards the goal. Such dialogue strategies, which we refer to as goal-hiding strategies, are common in diverse contexts such as criminal interrogations and medical assessments, involving sensitive topics. Conversely, in malicious online interactions like social media extortion, similar strategies might aim to manipulate individuals into revealing information or agreeing to unfavorable terms. This paper proposes a formal dialogue framework for understanding goal-hiding strategies. The dialogue framework uses Quantitative Bipolar Argumentation Frameworks (QBAFs) to assign willingness scores to topics. An initial willingness for each topic is modified by considering how topics promote (support) or demote (attack) other topics. We introduce a method to identify relations among topics by considering a respondent's shared information. Finally, we introduce a gradual semantics to estimate changes in willingness as new topics are opened. Our formal analysis and empirical evaluation show the system's compliance with privacy-preserving safety properties. A formal understanding of goal-hiding strategies opens up a range of practical applications. For instance, a seeker agent may plan with goal-hiding to enhance privacy in human-agent interactions. Similarly, an observer agent (third-party) may be designed to enhance social media security by detecting goal-hiding strategies employed by users' interlocutors.
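
To illustrate the kind of willingness dynamics described above, a minimal sketch follows; the topics, weights, and the particular aggregation rule are illustrative assumptions and not necessarily the gradual semantics defined in the paper.

```python
# Hypothetical sketch of a QBAF-style willingness update (the aggregation rule below
# is an assumption for illustration, not necessarily the semantics from the paper).
# Opened topics support or attack other topics, raising or lowering the respondent's
# estimated willingness to discuss them.
base = {"hobbies": 0.8, "family": 0.6, "finances": 0.2}      # initial willingness
supports = {("hobbies", "family"): 0.5, ("family", "finances"): 0.7}
attacks = {("finances", "hobbies"): 0.3}                      # weights in [0, 1]

def update(scores):
    """One step of a simple gradual semantics: pull each topic's score towards 1
    in proportion to the support it receives, and towards 0 in proportion to attacks."""
    new = {}
    for topic, s0 in base.items():
        sup = sum(w * scores[src] for (src, dst), w in supports.items() if dst == topic)
        att = sum(w * scores[src] for (src, dst), w in attacks.items() if dst == topic)
        s = s0 + (1 - s0) * min(sup, 1.0) - s0 * min(att, 1.0)
        new[topic] = min(max(s, 0.0), 1.0)
    return new

scores = dict(base)
for _ in range(20):                      # iterate towards a fixed point
    scores = update(scores)

print({t: round(s, 2) for t, s in scores.items()})
# A seeker could open the goal topic ("finances") once its score crosses a threshold.
```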

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Formal Dialogues, Quantitative Argumentation, Information Extraction, Human-Agent Interactions, Theory of Mind
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-231921 (URN); 10.1016/j.ijar.2024.109325 (DOI); 001360705100001 (); 2-s2.0-85209369588 (Scopus ID)
Available from: 2024-11-18 Created: 2024-11-18 Last updated: 2024-12-06. Bibliographically approved
Charisi, V. & Dignum, V. (2024). Operationalizing AI regulatory sandboxes for children’s rights and well-being. In: Catherine Régis; Jean-Louis Denis; Maria Luciana Axente; Atsuo Kishimoto (Ed.), Human-centered AI: a multidisciplinary perspective for policy-makers, auditors, and users (pp. 231-249). CRC Press
Operationalizing AI regulatory sandboxes for children’s rights and well-being
2024 (English). In: Human-centered AI: a multidisciplinary perspective for policy-makers, auditors, and users / [ed] Catherine Régis; Jean-Louis Denis; Maria Luciana Axente; Atsuo Kishimoto, CRC Press, 2024, p. 231-249. Chapter in book (Refereed)
Abstract [en]

Children and youth are surrounded by AI in many of the products they use in their daily lives, from social media to education technology, video games, smart toys, and voice assistants. AI can affect the videos children watch online, their curriculum as they learn, and the way they play and interact with others. For the purposes of this chapter, we adopt UNICEF’s definition of AI. AI can be used to facilitate children’s development and empower them in their activities, but children are also vulnerable to potential risks posed by AI, including bias, cybersecurity, data protection and privacy, and lack of accessibility. AI must be designed to respect the rights of the child user and to provide equal opportunities for all children. Several organizations, including UNICEF and the World Economic Forum (WEF), have developed guidelines for child-centric AI, but the challenge of designing and implementing responsible and trusted child-centered AI remains complex to address. In this chapter, we describe challenges and propose a regulatory sandbox environment as a means to address children, businesses, regulations, and society as a whole. For the rest of the chapter, whenever we use the term “sandboxes,” we refer to “regulatory sandboxes.”

Place, publisher, year, edition, pages
CRC Press, 2024
Series
Artificial Intelligence and Robotics Series
National Category
Human Computer Interaction
Identifiers
urn:nbn:se:umu:diva-226970 (URN); 10.1201/9781003320791-25 (DOI); 2-s2.0-85195951940 (Scopus ID); 978-1-003-32079-1 (ISBN); 978-1-032-34161-3 (ISBN); 978-1-032-34162-0 (ISBN)
Available from: 2024-06-24 Created: 2024-06-24 Last updated: 2024-06-24. Bibliographically approved
Methnani, L., Dahlgren Lindström, A. & Dignum, V. (2024). The impact of mixed-initiative on collaboration in hybrid AI. In: Fabian Lorig; Jason Tucker; Adam Dahlgren Lindström; Frank Dignum; Pradeep Murukannaiah; Andreas Theodorou; Pınar Yolum (Ed.), HHAI 2024: hybrid human AI systems for the social good: proceedings of the third international conference on hybrid human-artificial intelligence. Paper presented at 3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI 2024, Hybrid, Malmö, Sweden, June 10-14, 2024 (pp. 469-471). Amsterdam: IOS Press
The impact of mixed-initiative on collaboration in hybrid AI
2024 (English). In: HHAI 2024: hybrid human AI systems for the social good: proceedings of the third international conference on hybrid human-artificial intelligence / [ed] Fabian Lorig; Jason Tucker; Adam Dahlgren Lindström; Frank Dignum; Pradeep Murukannaiah; Andreas Theodorou; Pınar Yolum, Amsterdam: IOS Press, 2024, p. 469-471. Conference paper, Published paper (Refereed)
Abstract [en]

This paper explores the integration of mixed-initiative systems in human-AI teams to improve coordination and communication in Search and Rescue (SAR) scenarios, leveraging dynamic control sharing to enhance operational effectiveness.

Place, publisher, year, edition, pages
Amsterdam: IOS Press, 2024
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 386
Keywords
Human-AI interaction, mixed-initiative systems, search and rescue, team coordination
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:umu:diva-228000 (URN); 10.3233/FAIA240227 (DOI); 2-s2.0-85198757074 (Scopus ID); 9781643685229 (ISBN)
Conference
3rd International Conference on Hybrid Human-Artificial Intelligence, HHAI 2024, Hybrid, Malmö, Sweden, June 10-14, 2024
Available from: 2024-07-22 Created: 2024-07-22 Last updated: 2025-02-18. Bibliographically approved
Stockinger, E., Maas, J., Talvitie, C. & Dignum, V. (2024). Trustworthiness of voting advice applications in Europe. Ethics and Information Technology, 26(3), Article ID 55.
Trustworthiness of voting advice applications in Europe
2024 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 26, no 3, article id 55. Article in journal (Refereed). Published.
Abstract [en]

Voting Advice Applications (VAAs) are interactive tools used to assist in one’s choice of a party or candidate to vote for in an upcoming election. They have the potential to increase citizens’ trust and participation in democratic structures. However, there is no established ground truth for one’s electoral choice, and VAA recommendations depend strongly on architectural and design choices. We assessed several representative European VAAs according to the Ethics Guidelines for Trustworthy AI provided by the European Commission, using publicly available information. We found scores to be comparable across VAAs and low in most requirements, with differences reflecting the kind of developing institution. Across VAAs, we identify the need for improvement in (i) transparency regarding the subjectivity of recommendations, (ii) diversity of stakeholder participation, (iii) user-centric documentation of the algorithm, and (iv) disclosure of the underlying values and assumptions.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
AI ethics, AI governance, Design for values, Responsible AI, Socio-technical systems, Voting advice applications
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228805 (URN); 10.1007/s10676-024-09790-6 (DOI); 001288319500002 (); 2-s2.0-85201274472 (Scopus ID)
Projects
EU H2020 ICT48 “Humane AI Net”
Funder
EU, Horizon 2020, 952026
Available from: 2024-08-28 Created: 2024-08-28 Last updated: 2024-08-28. Bibliographically approved
Methnani, L., Chiou, M., Dignum, V. & Theodorou, A. (2024). Who's in charge here? A survey on trustworthy AI in variable autonomy robotic systems. ACM Computing Surveys, 56(7), Article ID 184.
Who's in charge here? A survey on trustworthy AI in variable autonomy robotic systems
2024 (English). In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 56, no 7, article id 184. Article in journal (Refereed). Published.
Abstract [en]

This article surveys the Variable Autonomy (VA) robotics literature that considers two contributory elements to Trustworthy AI: transparency and explainability. These elements should play a crucial role when designing and adopting robotic systems, especially in VA where poor or untimely adjustments of the system's level of autonomy can lead to errors, control conflicts, user frustration, and ultimate disuse of the system. Despite this need, transparency and explainability are, to the best of our knowledge, mostly overlooked in the VA robotics literature or are not considered explicitly. In this article, we aim to present and examine the most recent contributions to the VA literature concerning transparency and explainability. In addition, we propose a way of thinking about VA by breaking these two concepts down based on: the mission of the human-robot team; who the stakeholder is; what needs to be made transparent or explained; why they need it; and how it can be achieved. Lastly, we provide insights and propose ways to move VA research forward. Our goal with this article is to raise awareness and inter-community discussions among the Trustworthy AI and the VA robotics communities.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
explainability, human control, transparency, trustworthy AI, Variable autonomy
National Category
Human Computer Interaction; Robotics and automation
Identifiers
urn:nbn:se:umu:diva-224164 (URN); 10.1145/3645090 (DOI); 001208811000023 (); 2-s2.0-85191097717 (Scopus ID)
Funder
Vinnova, 2021-04336; Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 952026
Available from: 2024-05-16 Created: 2024-05-16 Last updated: 2025-04-24. Bibliographically approved