Publications (10 of 71)
Kumar, S., Edan, Y. & Bensch, S. (2026). Enhancing robot understandability: a model to estimate varying levels of discrepancy. International Journal of Social Robotics, 18, Article ID 19.
Enhancing robot understandability: a model to estimate varying levels of discrepancy
2026 (English)In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 18, article id 19Article in journal (Refereed) Published
Abstract [en]

For the widespread deployment of robots in everyday tasks, a robot's actions, decisions and intentions must be understood by users. A fundamental factor affecting robot understandability is the underlying discrepancy between the robot's and the human's states of mind. This paper contributes to the field of robot understandability by providing valuable insights into how discrepancy and robot understandability are connected, how human behavior indicates underlying discrepancy, and how robots can use hidden Markov models to estimate varying discrepancy levels during interaction. We propose a systematic method to study human behavior indicators for assessing discrepancies. An exploratory study involving 36 participants interacting with a robot revealed that the smaller the discrepancy level between robot and human, the more efficient and successful the interactions are, even with vague or short robot instructions. The findings of the exploratory study were used to implement and train hidden Markov models that estimate varying levels of discrepancy. With this model, a robot may continuously assess the discrepancy during an interaction and adapt its behavior with the aim of decreasing discrepancy.
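The estimation approach described in the abstract can be illustrated with a minimal hidden Markov model filter. This is a sketch under stated assumptions, not the paper's implementation: the state names, observation labels, and all probabilities below are invented for illustration.

```python
# Minimal sketch (not the authors' implementation): a hidden Markov model
# whose hidden states are discrepancy levels and whose observations are
# coarse human-behavior indicators. All probabilities are illustrative.

STATES = ["low", "medium", "high"]          # hidden discrepancy levels
OBS = ["success", "hesitation", "failure"]  # observed behavior indicators

START = {"low": 0.5, "medium": 0.3, "high": 0.2}
TRANS = {
    "low":    {"low": 0.7, "medium": 0.2, "high": 0.1},
    "medium": {"low": 0.3, "medium": 0.5, "high": 0.2},
    "high":   {"low": 0.1, "medium": 0.3, "high": 0.6},
}
EMIT = {
    "low":    {"success": 0.7, "hesitation": 0.2, "failure": 0.1},
    "medium": {"success": 0.4, "hesitation": 0.4, "failure": 0.2},
    "high":   {"success": 0.1, "hesitation": 0.3, "failure": 0.6},
}

def discrepancy_filter(observations):
    """Forward algorithm: P(current discrepancy level | observations so far)."""
    belief = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        belief = {
            s: EMIT[s][obs] * sum(belief[p] * TRANS[p][s] for p in STATES)
            for s in STATES
        }
    total = sum(belief.values())
    return {s: v / total for s, v in belief.items()}
```

Running the filter on a sequence of failures and hesitations shifts the belief toward the high-discrepancy state, which is the kind of continuous assessment during interaction that the abstract describes.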

Place, publisher, year, edition, pages
Springer Nature, 2026
Keywords
Understandability, State-of-mind, Discrepancy levels, Exploratory study, Hidden Markov model
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:umu:diva-249511 (URN), 10.1007/s12369-025-01356-w (DOI), 001682485600005 (ISI), 2-s2.0-105029503754 (Scopus ID)
Funder
Swedish Research Council, 2022-04674
Available from: 2026-02-05. Created: 2026-02-05. Last updated: 2026-02-18. Bibliographically approved.
Galeas, J., Tudela, A., Pons, Ó., Bensch, S., Hellström, T. & Bandera, A. (2025). Building a self-explanatory social robot on the basis of an explanation-oriented runtime knowledge model. Electronics, 14(16), Article ID 3178.
Building a self-explanatory social robot on the basis of an explanation-oriented runtime knowledge model
2025 (English)In: Electronics, E-ISSN 2079-9292, Vol. 14, no 16, article id 3178Article in journal (Refereed) Published
Abstract [en]

In recent years, there has been growing interest in developing robots capable of explaining their behavior, thereby improving their acceptance by the humans with whom they share their environment. Proposed software designs are typically based on the advances being made in conversational systems built on deep learning techniques. However, apart from the ability to formulate explanations, the robot also needs an internal episodic memory, where it stores information from the continuous stream of experiences. Most previous proposals are designed to deal with short streams of episodic data (several minutes long). With the aim of managing larger experiences, in this work we propose a high-level episodic memory in which relevant events are abstracted to natural language concepts. The proposed framework is intimately linked to a software architecture in which the explanations, whether externalized or not, are shaped internally in a collaborative process involving the task-oriented software agents that make up the architecture. The core of this process is a runtime knowledge model, employed as a working memory whose evolution captures the causal events stored in the episodic memory. We present several use cases that illustrate how the suggested framework allows an autonomous robot to generate correct and relevant explanations of its actions and behavior.
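The abstraction step described above (raw events condensed into natural-language concepts before storage) can be sketched as follows. The event codes and concept strings are hypothetical, not taken from the paper.

```python
# A toy sketch of the high-level episodic memory idea (illustrative only):
# raw sensor-level events are abstracted to natural-language concepts
# before being stored, so long experience streams stay compact.

# Hypothetical mapping from raw event codes to concepts.
ABSTRACTIONS = {
    "grip_force>thresh": "grasped an object",
    "nav_goal_reached": "arrived at a location",
    "asr_input": "heard a spoken request",
}

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # (timestamp, concept) pairs

    def store(self, timestamp, raw_event):
        concept = ABSTRACTIONS.get(raw_event)
        if concept is not None:  # only abstracted, relevant events are kept
            self.episodes.append((timestamp, concept))

    def recount(self):
        """Return the stored experience as a natural-language summary."""
        return "; ".join(concept for _, concept in self.episodes)

mem = EpisodicMemory()
mem.store(1, "asr_input")
mem.store(2, "imu_noise")        # irrelevant raw event, not stored
mem.store(3, "nav_goal_reached")
print(mem.recount())  # heard a spoken request; arrived at a location
```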

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
explainability, robotics cognitive architecture, eXplainable Autonomous Robot
National Category
Robotics and automation
Identifiers
urn:nbn:se:umu:diva-243191 (URN), 10.3390/electronics14163178 (DOI), 001557466700001 (ISI), 2-s2.0-105014400692 (Scopus ID)
Funder
Swedish Research Council, 2022-04674; European Regional Development Fund (ERDF), PID2022-137344OB-C3X
Available from: 2025-08-19. Created: 2025-08-19. Last updated: 2025-09-22. Bibliographically approved.
Mårell-Olsson, E., Bensch, S., Hellström, T., Alm, H., Hyllbrant, A., Leonardson, M. & Westberg, S. (2025). Navigating the human–robot interface: exploring human interactions and perceptions with social and telepresence robots. Applied Sciences, 15(3), Article ID 1127.
Navigating the human–robot interface: exploring human interactions and perceptions with social and telepresence robots
2025 (English)In: Applied Sciences, E-ISSN 2076-3417, Vol. 15, no 3, article id 1127Article in journal (Refereed) Published
Abstract [en]

This study investigates user experiences of interactions with two types of robots: Pepper, a social humanoid robot, and Double 3, a self-driving telepresence robot. Conducted in a controlled setting with a specific participant group, this research aims to understand how the design and functionality of these robots influence user perception, interaction patterns, and emotional responses. The findings reveal diverse participant reactions, highlighting the importance of adaptability, effective communication, autonomy, and perceived credibility in robot design. Participants showed mixed responses to human-like emotional displays and expressed a desire for robots capable of more nuanced and reliable behaviors. Trust in robots was influenced by their perceived functionality and reliability. Despite limitations in sample size, the study provides insights into the ethical and social considerations of integrating AI in public and professional spaces, offering guidance for enhancing user-centered designs and expanding applications for social and telepresence robots in society.

Place, publisher, year, edition, pages
MDPI, 2025
Keywords
human-robot interaction (HRI), social and telepresence robots, user experience, Pepper robot, Double 3 robot
National Category
Human Computer Interaction
Research subject
Education
Identifiers
urn:nbn:se:umu:diva-234705 (URN), 10.3390/app15031127 (DOI), 001418413300001 (ISI), 2-s2.0-85217581022 (Scopus ID)
Available from: 2025-01-28. Created: 2025-01-28. Last updated: 2025-02-26. Bibliographically approved.
Galeas, J., Bensch, S., Hellström, T. & Bandera, A. (2025). Personalized causal explanations of a robot’s behavior. Frontiers in Robotics and AI, 12, Article ID 1637574.
Personalized causal explanations of a robot’s behavior
2025 (English)In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 12, article id 1637574Article in journal (Refereed) Published
Abstract [en]

The deployment of robots in environments shared with humans implies that they must be able to justify or explain their behavior to nonexpert users when the user, or the situation itself, requires it. We propose a framework for robots to generate personalized explanations of their behavior by integrating cause-and-effect structures, social roles, and natural language queries. Robot events are stored as cause–effect pairs in a causal log. Given a human natural language query, the system uses machine learning to identify the matching cause-and-effect entry in the causal log and determine the social role of the inquirer. An initial explanation is generated and is then further refined by a large language model (LLM) to produce linguistically diverse responses tailored to the social role and the query. This approach maintains causal and factual accuracy while providing language variation in the generated explanations. Qualitative and quantitative experiments show that combining the causal information with the social role and the query when generating the explanations yields the most appreciated explanations.
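A minimal sketch may help illustrate the causal-log idea; it is not the paper's code. The log entries below are invented, and a simple token-overlap score stands in for both the machine-learning query matcher and the LLM rephrasing described in the abstract.

```python
# Illustrative sketch of a causal log (not the paper's implementation):
# robot events are stored as cause-effect pairs, a natural language query
# is matched to the most similar entry, and the explanation is phrased
# according to the inquirer's social role. Entries are hypothetical.

CAUSAL_LOG = [
    ("battery level dropped below threshold", "robot returned to the charging dock"),
    ("obstacle detected in the corridor", "robot chose the longer route"),
    ("user asked for the red cup", "robot grasped the red cup"),
]

def explain(query, role="visitor"):
    """Return a cause-effect explanation for the best-matching log entry."""
    q = set(query.lower().split())

    def overlap(entry):
        cause, effect = entry
        return len(q & set((cause + " " + effect).lower().split()))

    cause, effect = max(CAUSAL_LOG, key=overlap)
    if role == "engineer":  # a more technical register for expert roles
        return f"Event: {effect}. Trigger: {cause}."
    return f"I {effect.split('robot ')[-1]} because {cause}."
```

For example, `explain("why did you take the longer route")` matches the obstacle entry and answers in first person, while `role="engineer"` produces the same causal content in a terse event/trigger format.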

Place, publisher, year, edition, pages
Frontiers Media S.A., 2025
Keywords
explainable robots, understandable robots, personalized explanations, speaker role recognition, human–robot interaction, causal explanations
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-245529 (URN), 10.3389/frobt.2025.1637574 (DOI), 001596898600001 (ISI), 41132517 (PubMedID), 2-s2.0-105019324571 (Scopus ID)
Available from: 2025-10-14. Created: 2025-10-14. Last updated: 2025-10-30. Bibliographically approved.
Hellström, T., Kaiser, N. & Bensch, S. (2024). A taxonomy of embodiment in the AI era. Electronics, 13, Article ID 4441.
A taxonomy of embodiment in the AI era
2024 (English)In: Electronics, E-ISSN 2079-9292, Vol. 13, article id 4441Article in journal (Refereed) Published
Abstract [en]

This paper presents a taxonomy of agents’ embodiment in physical and virtual environments. It categorizes embodiment based on five entities: the agent being embodied, the possible mediator of the embodiment, the environment in which sensing and acting take place, the degree of body, and the intertwining of body, mind, and environment. The taxonomy is applied to a wide range of embodiments of humans, artifacts, and programs, including recent technological and scientific innovations related to virtual reality, augmented reality, telepresence, the metaverse, digital twins, and large language models. The presented taxonomy is a powerful tool to analyze, clarify, and compare complex cases of embodiment. For example, it makes the choice between a dualistic and non-dualistic perspective of an agent’s embodiment explicit and clear. The taxonomy also helped us formulate the term “embodiment by proxy” to denote how seemingly non-embodied agents may affect the world by using humans as “extended arms”. We also introduce the concept “off-line embodiment” to describe large language models’ ability to create an illusion of human perception.

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
cognition, robotics, interaction, avatar, digital twins
National Category
Other Computer and Information Science
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-231747 (URN), 10.3390/electronics13224441 (DOI), 001364404300001 (ISI), 2-s2.0-85210288412 (Scopus ID)
Funder
Swedish Research Council, 2022-04674
Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2024-12-06. Bibliographically approved.
Hellström, T. & Bensch, S. (2024). Apocalypse now: no need for artificial general intelligence. AI & Society: Knowledge, Culture and Communication, 39, 811-813
Apocalypse now: no need for artificial general intelligence
2024 (English)In: AI & Society: Knowledge, Culture and Communication, ISSN 0951-5666, E-ISSN 1435-5655, Vol. 39, p. 811-813Article in journal (Refereed) Published
Place, publisher, year, edition, pages
Springer, 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-198054 (URN), 10.1007/s00146-022-01526-8 (DOI), 000819880400002 (ISI), 2-s2.0-85133266192 (Scopus ID)
Available from: 2022-07-14. Created: 2022-07-14. Last updated: 2025-12-01. Bibliographically approved.
Bensch, S. & Hellström, T. (2024). Biased large language models for debating robots. In: : . Paper presented at ICSR'24, 16th International Conference on Social Robotics, Odense, Denmark, October 23-26, 2024.
Biased large language models for debating robots
2024 (English)Conference paper, Oral presentation only (Other academic)
Abstract [en]

The recent development of large language models (LLMs) and improvements in speech recognition have made it realistic to envision AI-driven robots replacing humans in debates, or even debating with each other. One application area is politics, where debating robots could support human politicians and, in principle, could also debate on their own. However, political debate is highly complex and often guided by values and ideologies rather than by rational decisions based on explicit facts.

In this paper we discuss how introducing appropriate bias into an LLM can be a way to accomplish this. As a simple proof of concept, we present a novel system of three robots conducting verbal debates on selectable topics. The robots are driven by the LLM GPT-3.5, and the desired political view, level of knowledge, and speaking style of each robot are configurable. The results demonstrate how LLMs may be used to argue both for and against different standpoints in debates, and how the output arguments depend on a programmed bias reflecting desired values and ideological principles.
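The configurable bias described above amounts to conditioning each robot's LLM on a persona. The sketch below is a hypothetical illustration of composing such a conditioning prompt; the paper's actual prompt design is not reproduced here, and the field names are assumptions.

```python
# Hypothetical sketch: per-robot bias expressed as a system prompt that
# fixes the standpoint, knowledge level, and speaking style of a debater.
# The wording and parameters are illustrative, not the paper's prompts.

def debate_persona_prompt(stance, knowledge_level, style):
    """Compose a system prompt that biases an LLM toward a debate persona."""
    return (
        f"You are a debating robot. Argue consistently from a {stance} "
        f"standpoint, at a {knowledge_level} level of factual detail, "
        f"in a {style} speaking style. Stay in character."
    )

prompt = debate_persona_prompt("pro-environmental", "expert", "formal")
```

Such a prompt would then be sent as the system message for one robot, with a differently configured prompt for its opponent, so the same underlying model argues both for and against a standpoint.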

Keywords
AI, politics, robots, cognition
National Category
Other Computer and Information Science
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-231703 (URN)
Conference
ICSR'24, 16th International Conference on Social Robotics, Odense, Denmark, October 23-26, 2024
Funder
Swedish Research Council, 2022-04674
Available from: 2024-11-11. Created: 2024-11-11. Last updated: 2024-11-12. Bibliographically approved.
Bensch, S. (Ed.). (2024). Proceedings of Umeå's 27th Student Conference in Computing: USCCS 2024. Paper presented at Umeå's 27th Student Conference in Computing (USCCS 2024). Umeå: Umeå University
Proceedings of Umeå's 27th Student Conference in Computing: USCCS 2024
2024 (English)Conference proceedings (editor) (Refereed)
Abstract [en]

The Umeå Student Conference in Computing Science (USCCS) is an annual event organized as part of a course offered by the Department of Computing Science at Umeå University. The primary aim of the course is to provide students with a hands-on introduction to independent research, scientific writing, and oral presentation.

A student who participates in the course selects a topic in computing science and related areas and formulates a research question. The course revolves around three significant milestones. The first milestone requires students to write a literature overview with an annotated bibliography, demonstrating not only their academic proficiency but also grounding their research in existing literature - standing on the shoulders of giants. The second milestone involves the actual research and its description in a scholarly manner, demonstrating a commitment to academic excellence, rigour and adherence to high standards. The third milestone encompasses the analysis and discussion of the obtained results, ensuring a thorough and objective examination.

These three milestones are supported by three peer-review group meetings, with 4-5 students in each group. During these sessions, each ongoing draft or milestone is critically discussed, with the aim of guiding and improving the draft. This process provides valuable training in both giving and receiving constructive criticism.

In addition, four lectures support the students’ learning and progress in the incremental development and refinement of a scientific paper, and timely discussions on research ethics and quality.

Each scientific paper is submitted to USCCS through EasyChair, an online submission system, and receives anonymous reviews from experts in the field. Based on the reviews and the editor’s assessment, a decision on acceptance is made. Reviewers’ comments are incorporated, and the revised manuscripts undergo a final review before being included in these proceedings. The review process and conference format aim to simulate realistic settings for publishing processes and participation in scientific conferences.

The conference is the highlight of the course, and this year, we received 14 submissions out of a possible 16, each thoroughly reviewed by experts listed on the following page. As a result, 8 submissions have been accepted for presentation at the conference. We extend our gratitude to the reviewers for their efforts within a tight timeframe and busy schedules. 

We also thank all authors for their dedication and outstanding final results, which will be presented during the conference. We wish all participants an interesting exchange of ideas and stimulating discussions throughout USCCS.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024. p. 117
Series
Report / UMINF, ISSN 0348-0542 ; 24.01
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-219054 (URN)
Conference
Umeå's 27th Student Conference in Computing (USCCS 2024)
Available from: 2024-01-07. Created: 2024-01-07. Last updated: 2024-01-08. Bibliographically approved.
Engwall, O., Bandera Rubio, J. P., Bensch, S., Haring, K. S., Kanda, T., Núñez, P., . . . Sgorbissa, A. (2023). Editorial: Socially, culturally and contextually aware robots. Frontiers in Robotics and AI, 10, Article ID 1232215.
Editorial: Socially, culturally and contextually aware robots
2023 (English)In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1232215Article in journal, Editorial material (Other academic) Published
Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
context awareness, cultural awareness, original research, reviews-articles, social robots
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-217212 (URN), 10.3389/frobt.2023.1232215 (DOI), 2-s2.0-85177180605 (Scopus ID)
Available from: 2023-11-29. Created: 2023-11-29. Last updated: 2023-11-29. Bibliographically approved.
Bensch, S. & Eriksson, A. (2023). Mining multi-modal communication patterns in interaction with explainable and non-explainable robots. In: : . Paper presented at IEEE RO-MAN 2023, 32nd IEEE International conference on Robot and Human Interactive Communication; Workshop Human-Robot Interaction for Explainability in Robotics, Busan, Korea, August 28-31, 2023.
Mining multi-modal communication patterns in interaction with explainable and non-explainable robots
2023 (English)Conference paper, Oral presentation only (Refereed)
Abstract [en]

We investigate interaction patterns for humans interacting with explainable and non-explainable robots. Here, non-explainable robots are robots that do not explain their actions or non-actions, nor do they give any other feedback during interaction, in contrast to explainable robots. We video recorded and analyzed human behavior during a board game, in which 20 humans verbally instructed either an explainable or a non-explainable Pepper robot to move objects on the board. The transcriptions and annotations of the videos were transformed into transactions for association rule mining. Association rules revealed communication patterns in the interaction between the robots and the humans, and the most interesting rules were also tested with regular chi-square tests. Some statistically significant results are that there is a strong correlation between men and non-explainable robots, and between women and explainable robots, and that humans mirror some of the robot’s modalities. Our results also show that it is important to contextualize human interaction patterns, and that this can easily be done using association rules as an investigative tool. The presented results are important when designing robots that should adapt their behavior to become understandable for the interacting humans.
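The transformation of annotations into transactions and the mining of rules can be illustrated with the standard support and confidence measures over toy transactions. The items below are invented examples, not data from the study.

```python
# Minimal association-rule sketch over hypothetical interaction
# transactions. Each transaction is the set of annotations for one
# human-robot interaction; the items are illustrative only.

transactions = [
    {"explainable_robot", "verbal", "gesture", "success"},
    {"explainable_robot", "verbal", "success"},
    {"non_explainable_robot", "verbal", "failure"},
    {"non_explainable_robot", "gesture", "failure"},
    {"explainable_robot", "gesture", "success"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimate P(consequent | antecedent) from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {explainable_robot} -> {success}: holds in every matching transaction.
print(confidence({"explainable_robot"}, {"success"}))  # 1.0
```

Rules with high support and confidence are the candidate communication patterns; as in the paper, the interesting ones can then be checked with a chi-square test of independence.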

National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-214433 (URN)
Conference
IEEE RO-MAN 2023, 32nd IEEE International conference on Robot and Human Interactive Communication; Workshop Human-Robot Interaction for Explainability in Robotics, Busan, Korea, August 28-31, 2023
Available from: 2023-09-14. Created: 2023-09-14. Last updated: 2024-08-28. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-3248-3839
