Publications (10 of 99)
Kampik, T. & Nieves, J. C. (2025). Disagree and commit: degrees of argumentation-based agreements. Autonomous Agents and Multi-Agent Systems, 39(1), Article ID 8.
Disagree and commit: degrees of argumentation-based agreements
2025 (English). In: Autonomous Agents and Multi-Agent Systems, ISSN 1387-2532, E-ISSN 1573-7454, Vol. 39, no. 1, article id 8. Article in journal (Refereed). Published.
Abstract [en]

In cooperative human decision-making, agreements are often not total; a partial degree of agreement is sufficient to commit to a decision and move on, as long as one is somewhat confident that the involved parties are likely to stand by their commitment in the future, given no drastic unexpected changes. In this paper, we introduce the notion of agreement scenarios that allow artificial autonomous agents to reach such agreements, using formal models of argumentation, in particular abstract argumentation and value-based argumentation. We introduce the notions of degrees of satisfaction and (minimum, mean, and median) agreement, as well as a measure of the impact a value in a value-based argumentation framework has on these notions. We then analyze how degrees of agreement are affected when agreement scenarios are expanded with new information, to shed light on the reliability of partial agreements in dynamic scenarios. An implementation of the introduced concepts is provided as part of an argumentation-based reasoning software library.
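
To make these notions concrete, here is a minimal sketch of computing degrees of satisfaction and minimum/mean agreement over the admissible sets of a small abstract argumentation framework. The acceptance-ratio definition of satisfaction is an illustrative assumption, not necessarily the paper's exact definition:

```python
# Hedged sketch: degrees of satisfaction and agreement over an abstract
# argumentation framework (AF). The acceptance-ratio definition below is
# an illustrative assumption, not necessarily the paper's exact one.
from itertools import chain, combinations

def conflict_free(s, attacks):
    return not any((a, b) in attacks for a in s for b in s)

def defends(s, attacks, a):
    # s defends a if it counterattacks every attacker of a
    return all(any((d, x) in attacks for d in s)
               for (x, y) in attacks if y == a)

def admissible_sets(args, attacks):
    candidates = chain.from_iterable(combinations(args, r) for r in range(len(args) + 1))
    return [set(c) for c in candidates
            if conflict_free(c, attacks) and all(defends(c, attacks, a) for a in c)]

def satisfaction(agent_args, extension):
    # share of the agent's arguments accepted in the extension
    return len(agent_args & extension) / len(agent_args)

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c")}
agents = [{"a", "c"}, {"b"}]
for ext in admissible_sets(args, attacks):
    degrees = [satisfaction(ag, ext) for ag in agents]
    print(sorted(ext), "min:", min(degrees), "mean:", sum(degrees) / len(degrees))
```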

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Formal argumentation, agreement technologies, multi-agent systems
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-235100 (URN); 10.1007/s10458-025-09688-7 (DOI); 001406623800001; 2-s2.0-85218109624 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2025-02-06. Created: 2025-02-06. Last updated: 2025-03-05. Bibliographically approved.
Guerrero Rosero, E. & Nieves, J. C. (2025). Semantic argumentation using rewriting systems. In: Pedro Cabalar; Francesco Fabiano; Martin Gebser; Gopal Gupta; Theresa Swift (Ed.), EPTCS 416: Proceedings 40th International Conference on Logic Programming. Paper presented at 40th International Conference on Logic Programming, ICLP 2024, Dallas, USA, October 14-17, 2024 (pp. 135-138). Open Publishing Association.
Semantic argumentation using rewriting systems
2025 (English). In: EPTCS 416: Proceedings 40th International Conference on Logic Programming / [ed] Pedro Cabalar; Francesco Fabiano; Martin Gebser; Gopal Gupta; Theresa Swift, Open Publishing Association, 2025, p. 135-138. Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

In this article, we introduce a general framework for structured argumentation that provides consistent, well-defined justifications both for conclusions that can be inferred with certainty and for those that cannot; we call the corresponding arguments semantic arguments and NAF-arguments, respectively. The proposed semantic argumentation guarantees well-known quality principles for structured argumentation and can generate both kinds of arguments: those whose conclusion atoms are semantically interpreted as true, and those whose conclusions are assumed to be false. The framework is defined on the set of all logic programs in terms of rewriting systems based on confluent sets of transformation rules, the so-called Confluent Logic Programming Systems, which makes the approach general. We provide an open-source implementation, a semantic argumentation solver.
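
As a rough illustration of the rewriting-system idea, the sketch below normalizes a small propositional logic program with two classic transformation rules (positive reduction and failure) applied to a fixpoint. The actual rule set of the paper's Confluent Logic Programming Systems is assumed, not quoted:

```python
# A rough sketch of rewriting a logic program to a normal form with two
# classic transformations. A rule is (head, positive_body, negative_body),
# e.g. ("a", ("b",), ("c",)) encodes  a :- b, not c.
def heads(program):
    return {h for (h, _, _) in program}

def rewrite(program):
    program, changed = set(program), True
    while changed:
        defined = heads(program)
        # Positive reduction: drop "not c" when c has no rule at all.
        new = {(h, pos, tuple(n for n in neg if n in defined))
               for (h, pos, neg) in program}
        # Failure: drop rules whose positive body uses an undefined atom.
        new = {(h, pos, neg) for (h, pos, neg) in new
               if all(p in heads(new) for p in pos)}
        program, changed = new, new != program
    return program

prog = {("a", ("b",), ("c",)), ("b", (), ())}
print(rewrite(prog))  # 'not c' is resolved away: {('a', ('b',), ()), ('b', (), ())}
```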

Place, publisher, year, edition, pages
Open Publishing Association, 2025
Series
Electronic proceedings in theoretical computer science, ISSN 2075-2180
National Category
Computer Systems; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-236463 (URN); 10.4204/EPTCS.416.12 (DOI); 2-s2.0-85218631270 (Scopus ID)
Conference
40th International Conference on Logic Programming, ICLP 2024, Dallas, USA, October 14-17, 2024
Note

Extended abstract.

Available from: 2025-03-19. Created: 2025-03-19. Last updated: 2025-03-19. Bibliographically approved.
Brännström, A., Wester, J. & Nieves, J. C. (2024). A formal understanding of computational empathy in interactive agents. Cognitive Systems Research, 85, Article ID 101203.
A formal understanding of computational empathy in interactive agents
2024 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 85, article id 101203. Article in journal (Refereed). Published.
Abstract [en]

Interactive software agents, such as chatbots, are increasingly being used in the area of health and well-being. In such applications, where agents engage with users in interpersonal conversations for, e.g., coaching, comfort, or behavior-change interventions, there is a growing need to understand agents' empathic capabilities. In the current state of the art, there are no tools for doing so. To understand empathic capabilities in interactive software agents, we need a precise notion of empathy. The literature discusses a variety of definitions of empathy, but there is no consensus on a formal definition. Based on a systematic literature review and a qualitative analysis of recent approaches to empathy in interactive agents for health and well-being, we develop a formal definition, an ontology, of empathy. We demonstrate the potential of the formal definition in a controlled user study by applying it as a tool for assessing empathy in two state-of-the-art health and well-being chatbots: Replika and Wysa. Our findings suggest that our definition captures necessary conditions for assessing empathy in interactive agents, and show how it can uncover and explain trends in changing perceptions of empathy over time. The definition, implemented in the Web Ontology Language (OWL), may serve as an automated tool, enabling systems to recognize empathy in interactions, be it an interactive agent evaluating its own empathic performance or an intelligent system assessing the empathic capability of its interlocutors.
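
Since the definition is implemented in OWL, a hypothetical sketch using the owlready2 library shows how such an ontology might be declared and queried. All class and property names here are illustrative assumptions, not the paper's actual ontology:

```python
# A hypothetical sketch (using the owlready2 library) of how an empathy
# ontology in OWL might encode observable empathic cues. All class and
# property names are illustrative, not the paper's ontology.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/empathy.owl")

with onto:
    class Agent(Thing): pass
    class EmpathicCue(Thing): pass               # an observable empathic behavior
    class EmotionRecognition(EmpathicCue): pass  # e.g. naming the user's emotion
    class PerspectiveTaking(EmpathicCue): pass   # e.g. adopting the user's view
    class exhibits(Agent >> EmpathicCue): pass   # links a turn to its cues

# Assert and inspect an interaction turn, e.g. from a chatbot transcript.
turn = onto.Agent("chatbot_turn_12")
turn.exhibits = [onto.EmotionRecognition(), onto.PerspectiveTaking()]
print([type(c).__name__ for c in turn.exhibits])
onto.save(file="empathy_example.owl")
```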

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Computational empathy, Conversational agents, Human–agent interaction, Knowledge engineering
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-221035 (URN); 10.1016/j.cogsys.2023.101203 (DOI); 001177600700001; 2-s2.0-85184060695 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2024-03-05. Created: 2024-03-05. Last updated: 2025-04-24. Bibliographically approved.
Taverner, J., Brännström, A., Durães, D., Vivancos, E., Novais, P., Nieves, J. C. & Botti, V. (2024). Computational affective knowledge representation for agents located in a multicultural environment. Human-centric Computing and Information Sciences, 14, Article ID 30.
Computational affective knowledge representation for agents located in a multicultural environment
2024 (English). In: Human-centric Computing and Information Sciences, E-ISSN 2192-1962, Vol. 14, article id 30. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we propose a new computational model of affective knowledge representation for affective agents located in multicultural environments. To this end, we present the results of two experiments. The first determines the most appropriate labels for defining the pleasure-arousal dimensions in the culture and language of the agent's location; as examples, we use Portuguese and Swedish. The second identifies the most suitable pleasure-arousal values for each emotion expressed in these languages. The results are compared with a previous model developed for agents interacting with European Spanish-speaking people, and show significant differences in the pleasure and arousal values associated with emotions across languages and cultures. They also show no significant differences by gender or age when associating levels of pleasure-arousal with emotions. We propose two applications of these representation models: an agent capable of adapting its affective behavior to different cultural environments, and a human-aware planning scenario in which the agent uses this dimensional representation to recognize the user's affective state and select the best strategy to redirect it toward a target state.
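
A minimal sketch of the kind of culture-indexed pleasure-arousal lookup such a model implies is shown below. The numeric values are invented placeholders for illustration, not the paper's experimental results:

```python
# A minimal sketch of a culture-indexed pleasure-arousal lookup.
# The numeric values are invented placeholders, not the paper's results.
AFFECT_MODEL = {
    # language/culture -> emotion label -> (pleasure, arousal) in [-1, 1]
    "pt": {"joy": (0.80, 0.55), "anger": (-0.70, 0.70)},
    "sv": {"joy": (0.75, 0.40), "anger": (-0.65, 0.60)},
}

def affective_state(culture, emotion):
    """(pleasure, arousal) an agent should assume for an emotion word
    in the given cultural/linguistic context."""
    return AFFECT_MODEL[culture][emotion]

# The same emotion maps to different dimensional values per culture:
print(affective_state("pt", "anger"), affective_state("sv", "anger"))
```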

Place, publisher, year, edition, pages
Korea Computer Industry Association (KCIA), 2024
Keywords
Affective Computing, Human Emotion Modeling, Human-Machine Interaction, Affective Agents, Emotion Representation, Cross-Cultural Emotion Representation
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-227803 (URN); 10.22967/HCIS.2024.14.030 (DOI); 001206297900001; 2-s2.0-85202292328 (Scopus ID)
Funder
EU, Horizon 2020, 952215
Available from: 2024-07-10. Created: 2024-07-10. Last updated: 2024-09-03. Bibliographically approved.
Nieves, J. C., Osorio, M., Rojas-Velazquez, D., Magallanes, Y. & Brännström, A. (2024). Digital companions for well-being: challenges and opportunities. Journal of Intelligent & Fuzzy Systems.
Digital companions for well-being: challenges and opportunities
2024 (English). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Humans have evolved to seek social connections, extending beyond interactions with living beings. The digitization of society has led to interactions with non-living entities, such as digital companions, aimed at supporting mental well-being. This literature review surveys the latest developments in digital companions for mental health, employing a hybrid search strategy that identified 67 relevant articles from 2014 to 2022. Given the purposes of digital companions, we identified that it is important to consider person profiles in order to: a) generate person-oriented and empathetic responses from these virtual companions, b) keep track of the person's conversations, activities, therapy, and progress, and c) allow portability and compatibility between digital companions (see the profile sketch below). We established a taxonomy for digital companions in the scope of mental well-being. We also identified open challenges for digital companions from ethical, technical, and socio-technical points of view, document what these issues entail, and discuss possible ways to approach them.
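
A sketch of the person-profile record implied by points (a)-(c) follows; the field names are assumptions for illustration, not a schema from the review:

```python
# A sketch of the person-profile record implied by points (a)-(c);
# field names are assumptions for illustration.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class PersonProfile:
    user_id: str
    preferences: dict = field(default_factory=dict)       # (a) person-oriented responses
    conversation_log: list = field(default_factory=list)  # (b) conversations & activities
    therapy_progress: dict = field(default_factory=dict)  # (b) therapy tracking

    def export(self) -> str:
        # (c) portability: serialize so another companion can import it
        return json.dumps(asdict(self))

profile = PersonProfile("u1", preferences={"tone": "encouraging"})
print(profile.export())
```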

Place, publisher, year, edition, pages
IOS Press, 2024
Keywords
Digital Companions, autonomous systems, Mental Well-being
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-227805 (URN); 10.3233/jifs-219336 (DOI)
Available from: 2024-07-10. Created: 2024-07-10. Last updated: 2024-10-24.
Bahuguna, A., Haydar, S., Brännström, A. & Nieves, J. C. (2024). Do datapoints argue?: Argumentation for hierarchical agreement in datasets. In: Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid; Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova (Ed.), Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part II. Paper presented at 2nd International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning (Hydra) @ ECAI-23, Kraków, Poland, September 30 - October 4, 2023 (pp. 291-303). Springer.
Do datapoints argue?: Argumentation for hierarchical agreement in datasets
2024 (English). In: Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part II / [ed] Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid; Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova, Springer, 2024, p. 291-303. Conference paper, Published paper (Refereed).
Abstract [en]

This work aims to utilize quantitative bipolar argumentation to detect deception in machine learning models. We explore the concept of deception in the context of interactions of a party developing a machine learning model with potentially malformed data sources. The objective is to identify deceptive or adversarial data and assess the effectiveness of comparative analysis during different stages of model training. By modeling disagreement and agreement between data points as arguments and utilizing quantitative measures, this work proposes techniques for detecting outliers in data. We discuss further applications in clustering and uncertainty modelling.
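
The core idea of modeling agreement and disagreement between data points as arguments can be sketched as follows. The DF-QuAD-style aggregation used here is an assumption about the paper's quantitative approach, and the distance thresholds are placeholders:

```python
# Sketch: datapoints as arguments. Nearby points support each other,
# distant points attack; a DF-QuAD-style gradual semantics (used here
# as an assumption about the paper's approach) scores each point, and
# a low final strength flags a potential outlier.
import math

def strength(points, i, radius=1.5, base=0.5):
    support, attack = [], []
    for j, q in enumerate(points):
        if i == j:
            continue
        d = math.dist(points[i], q)
        (support if d <= radius else attack).append(1.0 / (1.0 + d))
    agg = lambda vs: 1.0 - math.prod(1.0 - v for v in vs)  # probabilistic sum
    s, a = agg(support), agg(attack)
    # shift the base score up under net support, down under net attack
    return base + (1 - base) * (s - a) if s >= a else base * (1 - (a - s))

data = [(0.0, 0.0), (0.5, 0.2), (0.3, 0.8), (9.0, 9.0)]
print([round(strength(data, i), 2) for i in range(len(data))])
# the isolated point (9.0, 9.0) receives the lowest strength
```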

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1948
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-216004 (URN); 10.1007/978-3-031-50485-3_31 (DOI); 001259355800031; 2-s2.0-85184303025 (Scopus ID); 978-3-031-50484-6 (ISBN); 978-3-031-50485-3 (ISBN)
Conference
2nd International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning (Hydra) @ ECAI-23, Kraków, Poland, September 30 - October 4, 2023
Available from: 2023-10-30. Created: 2023-10-30. Last updated: 2025-04-24. Bibliographically approved.
Brännström, A., Dignum, V. & Nieves, J. C. (2024). Goal-hiding information-seeking dialogues: a formal framework. International Journal of Approximate Reasoning, 177, Article ID 109325.
Goal-hiding information-seeking dialogues: a formal framework
2024 (English). In: International Journal of Approximate Reasoning, ISSN 0888-613X, E-ISSN 1873-4731, Vol. 177, article id 109325. Article in journal (Refereed). Published.
Abstract [en]

We consider a type of information-seeking dialogue between a seeker agent and a respondent agent, where the seeker estimates the respondent to not be willing to share a particular set of sought-after information. Hence, the seeker postpones (hides) its goal topic, related to the respondent's sensitive information, until the respondent is perceived as willing to talk about it. In the intermediate process, the seeker opens other topics to steer the dialogue tactfully towards the goal. Such dialogue strategies, which we refer to as goal-hiding strategies, are common in diverse contexts such as criminal interrogations and medical assessments, involving sensitive topics. Conversely, in malicious online interactions like social media extortion, similar strategies might aim to manipulate individuals into revealing information or agreeing to unfavorable terms. This paper proposes a formal dialogue framework for understanding goal-hiding strategies. The dialogue framework uses Quantitative Bipolar Argumentation Frameworks (QBAFs) to assign willingness scores to topics. An initial willingness for each topic is modified by considering how topics promote (support) or demote (attack) other topics. We introduce a method to identify relations among topics by considering a respondent's shared information. Finally, we introduce a gradual semantics to estimate changes in willingness as new topics are opened. Our formal analysis and empirical evaluation show the system's compliance with privacy-preserving safety properties. A formal understanding of goal-hiding strategies opens up a range of practical applications. For instance, a seeker agent may plan with goal-hiding to enhance privacy in human-agent interactions. Similarly, an observer agent (third-party) may be designed to enhance social media security by detecting goal-hiding strategies employed by users' interlocutors.
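
The willingness-propagation mechanism can be sketched minimally as follows. The damped update rule is modeled on common gradual semantics and is an assumption, not necessarily the paper's exact definition, and the topics and scores are invented placeholders:

```python
# A sketch of willingness propagation over topics: each topic has an
# initial willingness in [0, 1], and supports/attacks between topics
# shift it. The update rule is an assumption modeled on common gradual
# semantics, not necessarily the paper's exact definition.
def propagate(init, supports, attacks, steps=50, lr=0.2):
    w = dict(init)
    for _ in range(steps):
        for t in w:
            s = sum(w[u] for (u, v) in supports if v == t)
            a = sum(w[u] for (u, v) in attacks if v == t)
            target = max(0.0, min(1.0, init[t] + s - a))
            w[t] += lr * (target - w[t])  # damped move toward the target
    return w

init = {"hobbies": 0.9, "family": 0.6, "finances": 0.2}
supports = [("hobbies", "family"), ("family", "finances")]
attacks = []
print(propagate(init, supports, attacks))
# opening "hobbies" raises willingness to discuss "family", which in
# turn raises willingness for the sensitive goal topic "finances"
```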

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Formal Dialogues, Quantitative Argumentation, Information Extraction, Human-Agent Interactions, Theory of Mind
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-231921 (URN); 10.1016/j.ijar.2024.109325 (DOI); 001360705100001; 2-s2.0-85209369588 (Scopus ID)
Available from: 2024-11-18. Created: 2024-11-18. Last updated: 2024-12-06. Bibliographically approved.
Aler Tubella, A., Mora-Cantallops, M. & Nieves, J. C. (2024). How to teach responsible AI in Higher Education: challenges and opportunities. Ethics and Information Technology, 26(1), Article ID 3.
How to teach responsible AI in Higher Education: challenges and opportunities
2024 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 26, no. 1, article id 3. Article in journal (Refereed). Published.
Abstract [en]

In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI, released in 2019, and the AI Act, proposed in 2021, set the starting point for a European Ethical AI, several challenges remain in translating such advances into public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches found in the existing literature and by interviewing 11 experts across five countries, to help define the educational strategies, competencies and resources needed for the successful implementation of Trustworthy AI in Higher Education (HE) and to reach students from all disciplines. The findings are presented as recommendations both for educators and for policy incentives, translating the guidelines into HE teaching and practice, so that the next generation of young people can contribute to an ethical, safe and cutting-edge AI made in Europe.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
AI ethics, Educational strategies, Higher Education, Trustworthy AI
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-218640 (URN); 10.1007/s10676-023-09733-7 (DOI); 001122376900001; 2-s2.0-85179628787 (Scopus ID)
Available from: 2023-12-27. Created: 2023-12-27. Last updated: 2025-04-24. Bibliographically approved.
Vossers, J., Brännström, A., Borglund, E., Hansson, J. & Nieves, J. C. (2024). Human-aware planning for situational awareness in indoor police interventions. In: Fabian Lorig; Jason Tucker; Adam Dahlgren Lindström; Frank Dignum; Pradeep Murukannaiah; Andreas Theodorou; Pınar Yolum (Ed.), HHAI 2024: hybrid human AI systems for the social good: proceedings of the third international conference on hybrid human-artificial intelligence. Paper presented at HHAI 2024, The third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, June 10-14, 2024 (pp. 325-334). Amsterdam: IOS Press.
Human-aware planning for situational awareness in indoor police interventions
2024 (English). In: HHAI 2024: hybrid human AI systems for the social good: proceedings of the third international conference on hybrid human-artificial intelligence / [ed] Fabian Lorig; Jason Tucker; Adam Dahlgren Lindström; Frank Dignum; Pradeep Murukannaiah; Andreas Theodorou; Pınar Yolum, Amsterdam: IOS Press, 2024, p. 325-334. Conference paper, Published paper (Refereed).
Abstract [en]

Indoor interventions are among the most dangerous situations police officers have to deal with, largely due to a lack of situational awareness. This work describes a planner, implemented in DLVK, that determines when to provide information. It is based on the General Tactical Explanation Model used by the Swedish police during tactical interventions. The planner is envisioned to be integrated into an augmented reality tool to enhance officers' situational awareness.

Place, publisher, year, edition, pages
Amsterdam: IOS Press, 2024
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 386
Keywords
situational awareness, human-aware planning, augmented reality
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-224476 (URN); 10.3233/FAIA240205 (DOI); 2-s2.0-85198716452 (Scopus ID); 978-1-64368-522-9 (ISBN)
Conference
HHAI 2024, The third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, June 10-14, 2024
Available from: 2024-05-17. Created: 2024-05-17. Last updated: 2024-07-22. Bibliographically approved.
Martín-Moncunill, D., Laredo, E. G. & Nieves, J. C. (2024). POTDAI: a tool to evaluate the perceived operational trust degree in artificial intelligence systems. IEEE Access, 12, 133097-133109.
POTDAI: a tool to evaluate the perceived operational trust degree in artificial intelligence systems
2024 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 12, p. 133097-133109. Article in journal (Refereed). Published.
Abstract [en]

There is evidence that a user's subjective confidence in an Artificial Intelligence (AI)-based system is crucial to its use, even more decisive than the objective effectiveness and efficiency of the system. Therefore, different methods have been proposed for analyzing confidence in AI. In our research, we set out to evaluate how the degree of perceived trust in an AI system could affect a user's final decision to follow AI recommendations. To this end, we established criteria that such an evaluation should meet, following a co-creation approach with a multidisciplinary group of 10 experts. After a systematic review of 3,204 articles, we found that none of the existing tools met the inclusion criteria. Thus, we introduce the "Perceived Operational Trust Degree in AI" (POTDAI) tool, based on the findings from the expert group and the literature analysis, with a methodology that adds rigor to that employed previously to create similar evaluation tools. We propose a short questionnaire for quick and easy application, inspired by the original version of the Technology Acceptance Model (TAM), with six Likert-type items. In this way, we also respond to the need, pointed out by authors such as Vorm and Combs, to extend the TAM to address questions related to user perception in systems with an AI component. Thus, POTDAI can be used alone or in combination with the TAM to obtain additional information on usefulness and ease of use.
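
A minimal sketch of scoring a short six-item Likert questionnaire in the spirit of POTDAI follows; the item wording and the normalization rule are placeholders, since the paper's exact instrument is not reproduced here:

```python
# A sketch of scoring a six-item Likert questionnaire in the spirit of
# POTDAI; the item and the scoring rule are placeholders, since the
# paper's exact instrument is not reproduced here.
ITEMS = [
    "I would follow this system's recommendation in my daily work.",
    # ... five further items would follow in the real instrument
]

def potdai_score(responses, scale_max=5):
    """responses: one integer in 1..scale_max per item."""
    if any(not 1 <= r <= scale_max for r in responses):
        raise ValueError("responses must lie on the Likert scale")
    return sum(responses) / (len(responses) * scale_max)  # normalized to [0, 1]

print(potdai_score([4, 5, 3, 4, 4, 5]))  # ~0.83
```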

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2024
Keywords
Artificial intelligence, Cooperative systems, Human-computer interaction, Human factors, Trustworthy AI, Technology Acceptance Model
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-229436 (URN); 10.1109/access.2024.3454061 (DOI); 001327303900001; 2-s2.0-85203541352 (Scopus ID)
Funder
EU, Horizon 2020, 952026
Available from: 2024-09-09. Created: 2024-09-09. Last updated: 2024-10-28. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4072-8795
