Publications (10 of 99)
Hellström, T. (2023). AI and its consequences for the written word. Frontiers in Artificial Intelligence, 6, Article ID 1326166.
AI and its consequences for the written word
2023 (English). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1326166. Article in journal (Refereed). Published
Abstract [en]

The latest developments of chatbots driven by Large Language Models (LLMs), more specifically ChatGPT, have shaken the foundations of how text is created, and may drastically reduce and change the need, ability, and valuation of human writing. Furthermore, our trust in the written word is likely to decrease, as an increasing proportion of all written text will be AI-generated – and potentially incorrect. In this essay, I discuss these implications and possible scenarios for us humans, and for AI itself.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2023
Keywords
AI, ChatGPT, human writing, Large Language Models, LLM, societal impact, the written word
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-220014 (URN), 10.3389/frai.2023.1326166 (DOI), 001143412200001 (), 38239498 (PubMedID), 2-s2.0-85182451653 (Scopus ID)
Research funder
Vetenskapsrådet, 2022-04674
Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2024-01-30 Bibliographically approved
Bensch, S., Sun, J., Bandera Rubio, J. P., Romero-Garcés, A. & Hellström, T. (2023). Personalised multi-modal communication for HRI. Paper presented at the WARN workshop at the 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Busan, Korea, August 28-31, 2023.
Personalised multi-modal communication for HRI
2023 (English). Conference paper, Oral presentation only (Refereed)
Abstract [en]

One important aspect of designing understandable robots is how a robot should communicate with a human user in order to be understood as well as possible. In elder care applications this is particularly important, and also difficult, since many older adults suffer from various kinds of impairments. In this paper we present a solution where the communication modality and communication parameters are adapted to fit both a user profile and an environment model comprising information about light and sound conditions that may affect communication. The Rasa dialogue manager is extended with the necessary functionality, and the operation is verified with a Pepper robot interacting with several personas with impaired vision, hearing, and cognition. Several relevant ethical questions are identified and briefly discussed, as a contribution to the WARN workshop.
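To make the adaptation idea concrete, here is a minimal illustrative sketch in Python. It is not the system described in the paper; the UserProfile, Environment, and choose_modality names, and all thresholds, are hypothetical placeholders for a profile- and environment-driven modality choice.

```python
# Illustrative sketch only (not the paper's implementation): choosing a
# communication modality and parameters from a user profile and a simple
# environment model. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class UserProfile:
    hearing_impaired: bool = False
    vision_impaired: bool = False
    cognitive_impairment: bool = False

@dataclass
class Environment:
    noise_level: float = 0.0   # 0 (silent) .. 1 (very noisy)
    light_level: float = 1.0   # 0 (dark) .. 1 (bright)

def choose_modality(user: UserProfile, env: Environment) -> dict:
    """Pick speech and/or screen output and adapt their parameters."""
    use_speech = not user.hearing_impaired and env.noise_level < 0.7
    use_screen = not user.vision_impaired and env.light_level > 0.2
    return {
        "speech": use_speech,
        "speech_volume": min(1.0, 0.5 + env.noise_level) if use_speech else 0.0,
        "screen": use_screen,
        "font_scale": 1.5 if user.vision_impaired or env.light_level < 0.5 else 1.0,
        "simplified_text": user.cognitive_impairment,
    }

print(choose_modality(UserProfile(hearing_impaired=True), Environment(noise_level=0.9)))
```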

National subject category
Computer Sciences; Robotics and automation
Identifiers
urn:nbn:se:umu:diva-214496 (URN)
Conference
WARN workshop at the 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN, Busan, Korea, August 28-31, 2023
Available from: 2023-09-19 Created: 2023-09-19 Last updated: 2023-09-20 Bibliographically approved
Persiani, M. & Hellström, T. (2023). Policy regularization for legible behavior. Neural Computing & Applications, 35(23), 16781-16790
Policy regularization for legible behavior
2023 (English). In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 35, no. 23, pp. 16781-16790. Article in journal (Refereed). Published
Abstract [en]

In this paper we propose a method to augment a Reinforcement Learning agent with legibility. This method is inspired by the literature in Explainable Planning and makes it possible to regularize the agent's policy after training, without modifying its learning algorithm. This is achieved by evaluating how the agent's optimal policy may produce observations that would lead an observer model to infer the wrong policy. In our formulation, the decision boundary introduced by legibility affects the states in which the agent's policy returns an action that is non-legible because it also has high likelihood under other policies. In these cases, a trade-off is made between that action and a legible but sub-optimal action. We tested our method in a grid-world environment, highlighting how legibility impacts the agent's optimal policy, and gathered both quantitative and qualitative results. In addition, we discuss how the proposed regularization generalizes beyond methods that operate on goal-driven policies, since it applies to general policies, of which goal-driven policies are a special case.
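To illustrate the kind of trade-off described above, here is a minimal sketch, not the authors' formulation: a trained tabular policy is mixed with a made-up legibility score measuring how strongly an action points to the agent's own policy rather than alternative policies. The function names, the mixing rule, and the example numbers are all hypothetical.

```python
# Illustrative sketch only, not the paper's method: post-hoc regularization of
# a tabular policy toward legibility. The legibility of an action is taken as
# the posterior probability an observer would assign to the agent's own policy
# (uniform prior over candidate policies). All names are hypothetical.
import numpy as np

def legibility(action, agent_probs, other_policies_probs):
    """P(agent's policy | action) under a uniform prior over policies."""
    p_agent = agent_probs[action]
    p_all = p_agent + sum(p[action] for p in other_policies_probs)
    return p_agent / p_all

def regularize(agent_probs, other_policies_probs, beta=0.5):
    """Mix the trained policy with a legibility term (beta = trade-off weight)."""
    leg = np.array([legibility(a, agent_probs, other_policies_probs)
                    for a in range(len(agent_probs))])
    scores = (1 - beta) * np.log(agent_probs + 1e-12) + beta * np.log(leg + 1e-12)
    new_probs = np.exp(scores - scores.max())
    return new_probs / new_probs.sum()

# Example: the optimal action (index 0) is also likely under another policy,
# so the regularized policy shifts probability mass toward action 1.
agent = np.array([0.6, 0.3, 0.1])
others = [np.array([0.7, 0.1, 0.2])]
print(regularize(agent, others, beta=0.7))
```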

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Reinforcement Learning, Transparency, Interpretability, Legibility
National subject category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-192813 (URN), 10.1007/s00521-022-07942-7 (DOI), 000875293700002 (), 2-s2.0-85140636891 (Scopus ID)
Note

Originally included in thesis in manuscript form.

Available from: 2022-02-28 Created: 2022-02-28 Last updated: 2023-12-05 Bibliographically approved
Edström, F., Hellström, T. & de Luna, X. (2023). Robot causal discovery aided by human interaction. In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Paper presented at IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Korea, August 28-31, 2023 (pp. 1731-1736). IEEE
Robot causal discovery aided by human interaction
2023 (English). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2023, pp. 1731-1736. Conference paper, Published paper (Refereed)
Abstract [en]

Causality is relatively unexplored in robotics, even though it is highly relevant in several respects. In this paper, we study how a robot's causal understanding can be improved by allowing the robot to ask humans causal questions. We propose a general algorithm for selecting direct causal effects to ask about, given a partial causal representation (a partially directed acyclic graph, PDAG) obtained from observational data. We propose three versions of the algorithm inspired by different causal discovery techniques: constraint-based, score-based, and intervention-based. We evaluate the versions in a simulation study, and our results show that asking causal questions improves the causal representation across all simulated scenarios. Further, the results show that asking causal questions based on PDAGs discovered from data provides a significant improvement compared to asking questions at random, and the version inspired by score-based techniques performs particularly well across all simulated experiments.
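The following is a minimal sketch of the question-selection idea, not the paper's algorithm: given a PDAG represented as sets of directed and undirected edges, pick one undirected edge to ask a human about and orient it according to the answer. The edge-scoring rule used here (prefer edges touching high-degree nodes) is a made-up placeholder, as are all names.

```python
# Illustrative sketch only, not the paper's algorithm: select an undirected
# edge of a PDAG to ask a human about ("does X cause Y?") and orient it from
# the answer. The scoring rule below is a hypothetical placeholder.
from collections import Counter

def select_question(directed, undirected):
    if not undirected:
        return None
    degree = Counter()
    for a, b in list(directed) + list(undirected):
        degree[a] += 1
        degree[b] += 1
    return max(undirected, key=lambda e: degree[e[0]] + degree[e[1]])

def ask_and_orient(directed, undirected, ask):
    """ask(x, y) returns True if the human says x causes y."""
    edge = select_question(directed, undirected)
    if edge is None:
        return directed, undirected
    x, y = edge
    undirected = [e for e in undirected if e != edge]
    directed = list(directed) + ([(x, y)] if ask(x, y) else [(y, x)])
    return directed, undirected

directed = [("A", "B")]
undirected = [("B", "C"), ("C", "D")]
print(ask_and_orient(directed, undirected, ask=lambda x, y: True))
```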

Place, publisher, year, edition, pages
IEEE, 2023
Series
IEEE RO-MAN proceedings, ISSN 1944-9445, E-ISSN 1944-9437
Keywords
human-robot interaction (HRI), causal discovery, causal inference
National subject category
Robotics and automation; Computer Sciences; Probability Theory and Statistics
Identifiers
urn:nbn:se:umu:diva-219029 (URN), 10.1109/RO-MAN57019.2023.10309376 (DOI), 001108678600221 (), 2-s2.0-85187012918 (Scopus ID), 9798350336702 (ISBN), 9798350336719 (ISBN)
Conference
IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Korea, August 28-31, 2023
Research funder
Vetenskapsrådet
Available from: 2024-01-05 Created: 2024-01-05 Last updated: 2024-03-18 Bibliographically approved
Hellström, T. & Bensch, S. (2022). Apocalypse now: no need for artificial general intelligence. AI & Society: The Journal of Human-Centred Systems and Machine Intelligence
Apocalypse now: no need for artificial general intelligence
2022 (English). In: AI & Society: The Journal of Human-Centred Systems and Machine Intelligence, ISSN 0951-5666, E-ISSN 1435-5655. Article in journal (Refereed). Epub ahead of print
Place, publisher, year, edition, pages
Springer, 2022
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-198054 (URN), 10.1007/s00146-022-01526-8 (DOI), 000819880400002 (), 2-s2.0-85133266192 (Scopus ID)
Available from: 2022-07-14 Created: 2022-07-14 Last updated: 2022-07-14
Bensch, S., Dignum, F. & Hellström, T. (2022). Increasing robot understandability through social practices. In: Proceedings of Cultu-Ro 2022, Workshop on Cultural Influences in Human-Robot Interaction: Today and Tomorrow: 31st IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 22). Paper presented at Ro-Man 2022, 31st IEEE International Conference on Robot and Human Interactive Communication, Naples, Italy, Aug 29 - September 2, 2022.
Increasing robot understandability through social practices
2022 (English). In: Proceedings of Cultu-Ro 2022, Workshop on Cultural Influences in Human-Robot Interaction: Today and Tomorrow: 31st IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 22), 2022. Conference paper, Published paper (Refereed)
Abstract [en]

In this short paper we discuss how incorporating social practices in robotics may contribute to how well humans understand robots' actions and intentions. Since social practices typically are applied by all interacting parties, the robots' understanding of the humans may also improve. We further discuss how the involved mechanisms have to be adjusted to fit the cultural context in which the interaction takes place, and how social practices may have to be transformed to fit a robot's capabilities and limitations.

National subject category
Engineering and Technology; Computer and Information Sciences
Identifiers
urn:nbn:se:umu:diva-199557 (URN)
Conference
Ro-Man 2022, 31st IEEE International Conference on Robot and Human Interactive Communication, Naples, Italy, Aug 29 - September 2, 2022
Available from: 2022-09-20 Created: 2022-09-20 Last updated: 2022-09-21 Bibliographically approved
Persiani, M. & Hellström, T. (2022). Informative communication of robot plans. In: Frank Dignum; Philippe Mathieu; Juan Manuel Corchado; Fernando De La Prieta (Ed.), Advances in practical applications of agents, multi-agent systems, and complex systems simulation: the PAAMS collection. Paper presented at 20th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2022), L'Aquila (Italy), 13-15 July, 2022 (pp. 332-344). Springer
Informative communication of robot plans
2022 (English). In: Advances in practical applications of agents, multi-agent systems, and complex systems simulation: the PAAMS collection / [ed] Frank Dignum; Philippe Mathieu; Juan Manuel Corchado; Fernando De La Prieta, Springer, 2022, pp. 332-344. Conference paper, Published paper (Other academic)
Abstract [en]

When a robot is asked to verbalize its plan, it can do so in many ways. For example, a seemingly natural strategy is incremental, where the robot verbalizes its planned actions in plan order. However, such a strategy does not consider what is actually informative to communicate, because it ignores what the user already knows before the explanation. In this paper we propose a verbalization strategy to communicate robot plans informatively, by measuring the information gain of verbalizations against a second-order theory of mind of the user that captures the user's prior knowledge about the robot. As shown in our experiments, this strategy allows the robot's goal to be understood much more quickly than with strategies such as increasing or decreasing plan order. In addition, following our formulation, we indicate what is informative, and why, when a robot communicates its plan.
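A minimal sketch of the information-gain idea follows; it is not the authors' second-order theory-of-mind formulation. A belief over candidate goals is updated per verbalized action, and the action whose verbalization most reduces the entropy of that belief is chosen first. The goal set, likelihoods, and function names are hypothetical.

```python
# Illustrative sketch only, not the paper's formulation: pick the plan action
# whose verbalization yields the largest reduction in entropy of the user's
# belief over the robot's possible goals. P(action | goal) is assumed given.
import math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def posterior(prior, likelihood_of_action):
    post = {g: prior[g] * likelihood_of_action[g] for g in prior}
    z = sum(post.values()) or 1.0
    return {g: p / z for g, p in post.items()}

def most_informative_action(plan, prior, likelihoods):
    """Return the plan action with the highest information gain, plus all gains."""
    h0 = entropy(prior)
    gains = {a: h0 - entropy(posterior(prior, likelihoods[a])) for a in plan}
    return max(gains, key=gains.get), gains

plan = ["goto_kitchen", "pick_cup"]
prior = {"make_coffee": 0.5, "clean_table": 0.5}
likelihoods = {  # hypothetical P(action | goal), as modeled in the user model
    "goto_kitchen": {"make_coffee": 0.9, "clean_table": 0.8},
    "pick_cup": {"make_coffee": 0.9, "clean_table": 0.1},
}
# "pick_cup" is verbalized first: it discriminates between the goals,
# whereas "goto_kitchen" is expected under both and carries little information.
print(most_informative_action(plan, prior, likelihoods))
```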

Place, publisher, year, edition, pages
Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13616
Keywords
Bayesian network, Human-robot interaction, Mirror agent model, Plan verbalization
National subject category
Robotics and automation
Identifiers
urn:nbn:se:umu:diva-192809 (URN), 10.1007/978-3-031-18192-4_27 (DOI), 2-s2.0-85141818392 (Scopus ID), 9783031181917 (ISBN), 9783031181924 (ISBN)
Conference
20th International Conference on Practical Applications of Agents and Multi-Agent Systems (PAAMS 2022), L'Aquila (Italy), 13-15 July, 2022
Note

Part of the book subseries Lecture Notes in Artificial Intelligence (LNAI).

Available from: 2022-02-28 Created: 2022-02-28 Last updated: 2022-12-15 Bibliographically approved
Persiani, M. & Hellström, T. (2022). The mirror agent model: a Bayesian architecture for interpretable agent behavior. In: Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling (Ed.), Explainable and transparent AI and multi-agent systems: 4th international workshop, EXTRAAMAS 2022, virtual event, May 9–10, 2022, revised selected papers. Paper presented at 4th International Workshop on EXplainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, Virtual event, May 9-10, 2022 (pp. 111-123). Springer Nature
The mirror agent model: a Bayesian architecture for interpretable agent behavior
2022 (English). In: Explainable and transparent AI and multi-agent systems: 4th international workshop, EXTRAAMAS 2022, virtual event, May 9–10, 2022, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling, Springer Nature, 2022, pp. 111-123. Conference paper, Published paper (Other academic)
Abstract [en]

In this paper we present a novel architecture for generating interpretable behavior and explanations. We refer to this architecture as the Mirror Agent Model because it defines the observer model, which is the target of explicit and implicit communication, as a mirror of the agent's own model. To provide a general understanding of this work, we first review prior relevant results addressing the informative communication of agent intentions and the production of legible behavior. In the second part of the paper we extend the architecture with novel capabilities for explanation through off-the-shelf saliency methods, followed by preliminary qualitative results.

Place, publisher, year, edition, pages
Springer Nature, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13283
Keywords
Interpretability, Explainability, Bayesian networks, Mirror Agent Model
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-194479 (URN), 10.1007/978-3-031-15565-9_7 (DOI), 000870042100007 (), 2-s2.0-85140488434 (Scopus ID), 978-3-031-15564-2 (ISBN), 978-3-031-15565-9 (ISBN)
Conference
4th International Workshop on EXplainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, Virtual event, May 9-10, 2022
Available from: 2022-05-05 Created: 2022-05-05 Last updated: 2022-11-10 Bibliographically approved
Persiani, M. & Hellström, T. (2021). Inference of the Intentions of Unknown Agents in a Theory of Mind Setting. In: Dignum F., Corchado J.M., De La Prieta F. (Ed.), Advances in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection: . Paper presented at 19th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2021, Salamanca, Spain, October 6-8, 2021. (pp. 188-200). Springer Science+Business Media B.V.
Inference of the Intentions of Unknown Agents in a Theory of Mind Setting
2021 (English). In: Advances in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection / [ed] Dignum F., Corchado J.M., De La Prieta F., Springer Science+Business Media B.V., 2021, pp. 188-200. Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous agents may be required to form an understanding of other agents for which they do not possess a model. In such cases, they must rely on previously gathered knowledge of other agents, and ground the observed behaviors in the models this knowledge describes through theory-of-mind reasoning. To make this process concrete, in this paper we propose an algorithm that grounds observations in a combination of previously acquired Belief-Desire-Intention models, while using rationality to infer unobservable variables. This makes it possible to jointly infer the beliefs, goals, and intentions of an unknown observed agent using only the available models.
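The following is a minimal sketch of grounding observations in a library of prior models via Bayesian updating; it is not the paper's algorithm. The paper works with BDI models and a rationality assumption, whereas here each candidate model is reduced to a simple action distribution, and all names and numbers are hypothetical.

```python
# Illustrative sketch only, not the paper's algorithm: infer which of several
# previously known agent models best explains an observed action sequence,
# assuming the observed agent acts approximately rationally under one of them.

def infer_model(models, observations, prior=None):
    """Return P(model | observations) for a library of candidate models."""
    prior = prior or {name: 1.0 / len(models) for name in models}
    post = dict(prior)
    for obs in observations:
        post = {name: post[name] * models[name].get(obs, 1e-6) for name in models}
        z = sum(post.values())
        post = {name: p / z for name, p in post.items()}
    return post

# Hypothetical model library: action probabilities under two candidate intentions.
models = {
    "wants_coffee": {"goto_kitchen": 0.6, "pick_cup": 0.3, "open_fridge": 0.1},
    "wants_snack": {"goto_kitchen": 0.5, "pick_cup": 0.1, "open_fridge": 0.4},
}
print(infer_model(models, ["goto_kitchen", "pick_cup"]))
```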

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2021
Series
International Conference on Practical Applications of Agents and Multi-Agent Systems, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Belief-desire-intention, Intent recognition, Planning domain description language, Theory of mind, Unknown agent model
National subject category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-188638 (URN), 10.1007/978-3-030-85739-4_16 (DOI), 000791045800016 (), 2-s2.0-85116381724 (Scopus ID), 9783030857387 (ISBN)
Conference
19th International Conference on Practical Applications of Agents and Multi-Agent Systems, PAAMS 2021, Salamanca, Spain, October 6-8, 2021
Note

Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), no. 12946

Available from: 2021-10-20 Created: 2021-10-20 Last updated: 2023-09-05 Bibliographically approved
Ostovar, A., Bensch, S. & Hellström, T. (2021). Natural Language Guided Object Retrieval in Images. Acta Informatica, 58, 243-261
Natural Language Guided Object Retrieval in Images
2021 (English). In: Acta Informatica, ISSN 0001-5903, E-ISSN 1432-0525, Vol. 58, pp. 243-261. Article in journal (Refereed). Published
Abstract [en]

The ability to understand the surrounding environment and to communicate with interacting humans are important functionalities for many automated systems in which visual input (e.g., images, video) and natural language input (speech or text) have to be related to each other. Possible applications are automatic image caption generation, interactive surveillance systems, or human-robot interaction. In this paper, we propose algorithms for automatic responses to natural language queries about an image. Our approach uses a pre-trained neural network to detect bounding boxes and objects in images, models spatial relations between bounding boxes with a neural network, analyzes the queries with a syntactic parser, and introduces algorithms to map natural language to properties in the images. The algorithms make use of semantic similarity and antonyms. We evaluate the performance of our approach with test users assessing the quality of our system's generated answers.
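A minimal sketch of the retrieval step follows; it is far simpler than the paper's neural pipeline. Query words are matched against detected object labels by exact comparison (standing in for the paper's semantic-similarity measure), and candidates are filtered by a spatial relation between bounding boxes. The detections, relation, and names are hypothetical.

```python
# Illustrative sketch only, much simpler than the paper's pipeline: match the
# query noun against detected object labels and filter by a spatial relation
# between bounding boxes. Detections and the matching rule are placeholders.

def left_of(box_a, box_b):
    """True if box_a lies entirely to the left of box_b (boxes are x1, y1, x2, y2)."""
    return box_a[2] <= box_b[0]

def retrieve(query_object, relation, reference_object, detections):
    refs = [d for d in detections if d["label"] == reference_object]
    hits = []
    for d in detections:
        if d["label"] != query_object:
            continue
        if relation == "left of" and any(left_of(d["box"], r["box"]) for r in refs):
            hits.append(d)
    return hits

detections = [
    {"label": "cup", "box": (10, 40, 60, 90)},
    {"label": "cup", "box": (300, 40, 350, 90)},
    {"label": "laptop", "box": (120, 30, 280, 160)},
]
# Query: "the cup left of the laptop" -> returns only the first cup.
print(retrieve("cup", "left of", "laptop", detections))
```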

Place, publisher, year, edition, pages
Springer, 2021
Keywords
convolutional neural network, natural language grounding, object retrieval, spatial relations, semantic similarity
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-165065 (URN), 10.1007/s00236-021-00400-2 (DOI), 000674657100002 (), 2-s2.0-85110811104 (Scopus ID)
Note

Previously included in thesis in manuscript form.

Available from: 2019-11-08 Created: 2019-11-08 Last updated: 2023-09-05 Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-7242-2200
