Publications (10 of 93)
Brännström, A., Wester, J. & Nieves, J. C. (2024). A formal understanding of computational empathy in interactive agents. Cognitive Systems Research, 85, Article ID 101203.
2024 (English). In: Cognitive Systems Research, ISSN 2214-4366, E-ISSN 1389-0417, Vol. 85, article id 101203. Article in journal (Refereed). Published.
Abstract [en]

Interactive software agents, such as chatbots, are progressively being used in the area of health and well-being. In such applications, where agents engage with users in interpersonal conversations for, e.g., coaching, comfort or behavior-change interventions, there is an increased need for understanding agents’ empathic capabilities. In the current state of the art, there are no tools to do so. In order to understand empathic capabilities in interactive software agents, we need a precise notion of empathy. The literature discusses a variety of definitions of empathy, but there is no consensus on a formal definition. Based on a systematic literature review and a qualitative analysis of recent approaches to empathy in interactive agents for health and well-being, a formal definition of empathy, in the form of an ontology, is developed. We present the potential of the formal definition in a controlled user study by applying it as a tool for assessing empathy in two state-of-the-art health and well-being chatbots: Replika and Wysa. Our findings suggest that our definition captures necessary conditions for assessing empathy in interactive agents, and show how it can uncover and explain trends in changing perceptions of empathy over time. The definition, implemented in the Web Ontology Language (OWL), may serve as an automated tool, enabling systems to recognize empathy in interactions, whether an interactive agent evaluating its own empathic performance or an intelligent system assessing the empathic capability of its interlocutors.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Computational empathy, Conversational agents, Human–agent interaction, Knowledge engineering
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-221035 (URN), 10.1016/j.cogsys.2023.101203 (DOI), 2-s2.0-85184060695 (Scopus ID)
Funder
Knut and Alice Wallenberg Foundation
Available from: 2024-03-05 Created: 2024-03-05 Last updated: 2024-03-05. Bibliographically approved.
Taverner, J., Brännström, A., Durães, D., Vivancos, E., Novais, P., Nieves, J. C. & Botti, V. (2024). Computational affective knowledge representation for agents located in a multicultural environment. Human-centric Computing and Information Sciences, 14, Article ID 30.
2024 (English). In: Human-centric Computing and Information Sciences, E-ISSN 2192-1962, Vol. 14, article id 30. Article in journal (Refereed). Published.
Abstract [en]

In this paper, we propose a new computational model for affective knowledge representation to be used by affective agents located in a multicultural environment. To this end, we present the results of two experiments, the first of which determines the most appropriate labels to define the pleasure-arousal dimensions in the culture and language of the agent’s location. As examples, we use the Portuguese and Swedish languages. The second experiment identifies the most suitable values of the pleasure-arousal dimensions for each emotion expressed in these example languages. The results obtained are compared with a previous model developed for agents interacting with European Spanish-speaking people. Results show significant differences in the values of pleasure and arousal associated with emotions across languages and cultures. The results also show no significant differences in gender or age when associating levels of pleasure-arousal with emotions. We propose two applications of these representation models: a model of an agent capable of adapting its affective behavior to different cultural environments, and a human-aware planning scenario in which the agent uses this dimensional representation to recognize the user’s affective state and select the best strategy to redirect that affective state to the target state.

Place, publisher, year, edition, pages
Korea Computer Industry Association (KCIA), 2024
Keywords
Affective Computing, Human Emotion Modeling, Human-Machine Interaction, Affective Agents, Emotion Representation, Cross-Cultural Emotion Representation
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-227803 (URN), 10.22967/HCIS.2024.14.030 (DOI), 001206297900001 ()
Funder
EU, Horizon 2020, 952215
Available from: 2024-07-10 Created: 2024-07-10 Last updated: 2024-07-10. Bibliographically approved.
Nieves, J. C., Osorio, M., Rojas-Velazquez, D., Magallanes, Y. & Brännström, A. (2024). Digital companions for well-being: challenges and opportunities. Journal of Intelligent & Fuzzy Systems
2024 (English). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Humans have evolved to seek social connections, extending beyond interactions with living beings. The digitization of society has led to interactions with non-living entities, such as digital companions, aimed at supporting mental well-being. This literature review surveys the latest developments in digital companions for mental health, employing a hybrid search strategy that identified 67 relevant articles from 2014 to 2022. We identified that, given the purposes of digital companions, it is important to consider person profiles in order to: a) generate both person-oriented and empathetic responses from these virtual companions, b) keep track of the person’s conversations, activities, therapy, and progress, and c) allow portability and compatibility between digital companions. We established a taxonomy for digital companions in the scope of mental well-being. We also identified open challenges for digital companions from ethical, technical, and socio-technical points of view. We documented what these issues mean and discuss possible alternatives for approaching them.

Keywords
Digital Companions, autonomous systems, Mental Well-being
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-227805 (URN), 10.3233/jifs-219336 (DOI)
Available from: 2024-07-10 Created: 2024-07-10 Last updated: 2024-07-10
Bahuguna, A., Haydar, S., Brännström, A. & Nieves, J. C. (2024). Do datapoints argue?: Argumentation for hierarchical agreement in datasets. In: Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid, Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova (Ed.), Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part II. Paper presented at 2nd International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning (Hydra) @ ECAI-23, Kraków, Poland, September 30 - October 4, 2023 (pp. 291-303). Springer
2024 (English). In: Artificial Intelligence. ECAI 2023 International Workshops: XAI^3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, Kraków, Poland, September 30 – October 4, 2023, Proceedings, Part II / [ed] Sławomir Nowaczyk; Przemysław Biecek; Neo Christopher Chung; Mauro Vallati; Paweł Skruch; Joanna Jaworek-Korjakowska; Simon Parkinson; Alexandros Nikitas; Martin Atzmüller; Tomáš Kliegr; Ute Schmid; Szymon Bobek; Nada Lavrac; Marieke Peeters; Roland van Dierendonck; Saskia Robben; Eunika Mercier-Laurent; Gülgün Kayakutlu; Mieczyslaw Lech Owoc; Karl Mason; Abdul Wahid; Pierangela Bruno; Francesco Calimeri; Francesco Cauteruccio; Giorgio Terracina; Diedrich Wolter; Jochen L. Leidner; Michael Kohlhase; Vania Dimitrova, Springer, 2024, p. 291-303. Conference paper, Published paper (Refereed).
Abstract [en]

This work aims to utilize quantitative bipolar argumentation to detect deception in machine learning models. We explore the concept of deception in the context of interactions of a party developing a machine learning model with potentially malformed data sources. The objective is to identify deceptive or adversarial data and assess the effectiveness of comparative analysis during different stages of model training. By modeling disagreement and agreement between data points as arguments and utilizing quantitative measures, this work proposes techniques for detecting outliers in data. We discuss further applications in clustering and uncertainty modelling.

Place, publisher, year, edition, pages
Springer, 2024
Series
Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937 ; 1948
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-216004 (URN), 10.1007/978-3-031-50485-3_31 (DOI), 2-s2.0-85184303025 (Scopus ID), 978-3-031-50484-6 (ISBN), 978-3-031-50485-3 (ISBN)
Conference
2nd International Workshop on HYbrid Models for Coupling Deductive and Inductive ReAsoning (Hydra) @ ECAI-23, Kraków, Poland, September 30 - October 4, 2023
Available from: 2023-10-30 Created: 2023-10-30 Last updated: 2024-02-21. Bibliographically approved.
Aler Tubella, A., Mora-Cantallops, M. & Nieves, J. C. (2024). How to teach responsible AI in Higher Education: challenges and opportunities. Ethics and Information Technology, 26(1), Article ID 3.
2024 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 26, no 1, article id 3. Article in journal (Refereed). Published.
Abstract [en]

In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI released in 2019 and the AI Act in 2021 set the starting point for a European Ethical AI, there are still several challenges to translate such advances into the public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches that can be found in the existing literature and by interviewing 11 experts across five countries to help define educational strategies, competencies and resources needed for the successful implementation of Trustworthy AI in Higher Education (HE) and to reach students from all disciplines. The findings are presented in the form of recommendations both for educators and policy incentives, translating the guidelines into HE teaching and practice, so that the next generation of young people can contribute to an ethical, safe and cutting-edge AI made in Europe.

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
AI ethics, Educational strategies, Higher Education, Trustworthy AI
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-218640 (URN), 10.1007/s10676-023-09733-7 (DOI), 2-s2.0-85179628787 (Scopus ID)
Available from: 2023-12-27 Created: 2023-12-27 Last updated: 2023-12-27. Bibliographically approved.
Vossers, J., Brännström, A., Borglund, E., Hansson, J. & Nieves, J. C. (2024). Human-aware planning for situational awareness in indoor police interventions. Paper presented at HHAI 2024, The third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, June 10-14, 2024. Frontiers in Artificial Intelligence and Applications
2024 (English). In: Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314. Article in journal (Refereed). Accepted.
Abstract [en]

Indoor interventions are among the most dangerous situations police officers have to deal with, mostly due to a lack of situational awareness. This work describes a planner, implemented in DLV^K, that determines when to provide information. It is based on the General Tactical Explanation Model used by the Swedish police during tactical interventions. The planner is envisioned to be integrated into an augmented reality tool to enhance officers’ situational awareness.

Keywords
situational awareness, human-aware planning, augmented reality
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-224476 (URN)
Conference
HHAI 2024, The third International Conference on Hybrid Human-Artificial Intelligence, Malmö, Sweden, June 10-14, 2024
Available from: 2024-05-17 Created: 2024-05-17 Last updated: 2024-05-20
Brännström, A. & Nieves, J. C. (2024). Towards control in agents for human behavior change: an autism case. Journal of Intelligent & Fuzzy Systems
2024 (English). In: Journal of Intelligent & Fuzzy Systems, ISSN 1064-1246, E-ISSN 1875-8967. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

This paper introduces an automated decision-making framework for providing controlled agent behavior in systems dealing with human behavior change. Controlled behavior in such settings is important in order to reduce unexpected side effects of a system’s actions. The general structure of the framework is based on a psychological theory, the Theory of Planned Behavior (TPB), which captures causes of human motivational states and thereby enables reasoning about the dynamics of human motivation. The framework consists of two main components: 1) an ontological knowledge base that models an individual’s behavioral challenges to infer motivational states, and 2) a transition system that, in a given motivational state, decides on motivational support, resulting in transitions between motivational states. The system generates plans (sequences of actions) for an agent to facilitate behavior change. A particular use case is modeled regarding children with Autism Spectrum Conditions (ASC), who commonly experience difficulties in everyday social situations. An evaluation of a proof-of-concept prototype shows consistencies between ASC experts’ suggestions and plans generated by the system.

Keywords
Knowledge-based systems, Automated reasoning, Autism Spectrum Conditions, Theory of Planned Behavior
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-227804 (URN), 10.3233/jifs-219335 (DOI)
Available from: 2024-07-10 Created: 2024-07-10 Last updated: 2024-07-10
Brännström, A., Dignum, V. & Nieves, J. C. (2023). A formal framework for deceptive topic planning in information-seeking dialogues. In: AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems. Paper presented at AAMAS 2023, the 22nd International Conference on Autonomous Agents and Multiagent Systems, London, United Kingdom, May 29 – June 2, 2023 (pp. 2376-2378).
2023 (English). In: AAMAS '23: Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, 2023, p. 2376-2378. Conference paper, Poster (with or without abstract) (Refereed).
Abstract [en]

This paper introduces a formal framework for goal-hiding information-seeking dialogues to deal with interactions where a seeker agent estimates that a human respondent is not willing to share the sought-for information. Hence, the seeker postpones (hides) a sensitive goal topic until the respondent is perceived to be willing to talk about it. This is a type of deceptive strategy for withholding information, e.g., a sensitive question, that, in a given dialogue state, may be harmful to a respondent, e.g., by violating privacy. The framework uses Quantitative Bipolar Argumentation Frameworks to assign willingness scores to topics, inferred from a respondent's asserted beliefs. A gradual semantics is introduced to handle changes in willingness scores based on relations among topics. The goal-hiding dialogue process is illustrated using an example inspired by primary healthcare nurses' strategies for collecting sensitive health information from patients.

Keywords
Formal dialogues, Formal argumentation, Knowledge extraction, Non-collaborative agents, Machine deception
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-208549 (URN), 2-s2.0-85171306289 (Scopus ID), 978-1-4503-9432-1 (ISBN)
Conference
AAMAS 2023, the 22nd International Conference on Autonomous Agents and Multiagent Systems, London, United Kingdom, May 29 – June 2, 2023
Available from: 2023-05-26 Created: 2023-05-26 Last updated: 2023-09-25. Bibliographically approved.
Morveli Espinoza, M., Nieves, J. C. & Tacla, C. A. (2023). A gradual semantics with imprecise probabilities for support argumentation frameworks. In: Kai Sauerwald; Matthias Thimm (Ed.), NMR 2023. International Workshop on Non-Monotonic Reasoning 2023: Proceedings of the 21st International Workshop on Non-Monotonic Reasoning co-located with the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023) and co-located with the 36th International Workshop on Description Logics (DL 2023). Paper presented at NMR 2023, The 21st International Workshop on Non-Monotonic Reasoning, Rhodes, Greece, September 2-4, 2023 (pp. 84-93). CEUR-WS, Article ID 9.
2023 (English). In: NMR 2023. International Workshop on Non-Monotonic Reasoning 2023: Proceedings of the 21st International Workshop on Non-Monotonic Reasoning co-located with the 20th International Conference on Principles of Knowledge Representation and Reasoning (KR 2023) and co-located with the 36th International Workshop on Description Logics (DL 2023) / [ed] Kai Sauerwald; Matthias Thimm, CEUR-WS, 2023, p. 84-93, article id 9. Conference paper, Published paper (Refereed).
Abstract [en]

Support Argumentation Frameworks (SAFs) are a type of Abstract Argumentation Framework in which the interactions between arguments have a positive nature. A quantitative way of evaluating the arguments in a SAF is to apply a gradual semantics, which assigns a numerical value to each argument with the aim of ranking or evaluating them. The gradual semantics studied in the literature determine precise probability values; however, many applications require imprecise evaluations that consider a range of values for assessing an argument. Thus, the first contribution of this article is an imprecise gradual semantics (IGS) based on credal network theory. The second contribution is a set of properties for evaluating IGSs, which extend some properties proposed for precise gradual semantics. In addition, we suggest a classification of semantics based on this set of properties and evaluate our proposed IGS according to the extended properties. Finally, the practical application of the results is discussed using an example from Network Science, i.e., PageRank. We also discuss how gradual semantics benefit PageRank research by allowing contrastive explanations about the scores to be generated in a more natural way.

Place, publisher, year, edition, pages
CEUR-WS, 2023
Series
CEUR Workshop proceedings, ISSN 1613-0073 ; 3464
Keywords
Support argumentation framework, Formal argumentation, Gradual semantics, Impreciseness, PageRank
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-214388 (URN), 2-s2.0-85171621561 (Scopus ID)
Conference
NMR 2023, The 21st International Workshop on Non-Monotonic Reasoning, Rhodes, Greece, September 2-4, 2023
Available from: 2023-09-13 Created: 2023-09-13 Last updated: 2023-10-23. Bibliographically approved.
Aler Tubella, A., Coelho Mollo, D., Dahlgren, A., Devinney, H., Dignum, V., Ericson, P., . . . Nieves, J. C. (2023). ACROCPoLis: a descriptive framework for making sense of fairness. In: FAccT '23: Proceedings of the 2023 ACM conference on fairness, accountability, and transparency. Paper presented at 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, Illinois, USA, June 12-15, 2023 (pp. 1014-1025). ACM Digital Library
2023 (English). In: FAccT '23: Proceedings of the 2023 ACM conference on fairness, accountability, and transparency, ACM Digital Library, 2023, p. 1014-1025. Conference paper, Published paper (Refereed).
Abstract [en]

Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and are experienced by individuals and social groups. In this paper, we do this by means of proposing the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects. The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit, as well as their interrelationships. This enables us to compare analogous situations, to highlight the differences in dissimilar situations, and to capture differing interpretations of the same situation by different stakeholders.

Place, publisher, year, edition, pages
ACM Digital Library, 2023
Keywords
Algorithmic fairness, socio-technical processes, social impact of AI, responsible AI
National Category
Information Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-209705 (URN), 10.1145/3593013.3594059 (DOI), 2-s2.0-85163594710 (Scopus ID), 978-1-4503-7252-7 (ISBN)
Conference
2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, Illinois, USA, June 12-15, 2023
Available from: 2023-06-13 Created: 2023-06-13 Last updated: 2023-07-18. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4072-8795