Umeå University
umu.se Publications
1 - 15 of 15
  • 1.
    Lindgren, Helena
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kaelin, Vera C.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ljusbäck, Ann Margreth
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Section of Occupational Therapy.
    Tewari, Maitreyee
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nilsson, Ingeborg
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Section of Occupational Therapy.
    To adapt or not to adapt? Older adults enacting agency in dialogues with an unknowledgeable agent. 2024. In: UMAP '24: proceedings of the 32nd ACM conference on user modeling, adaptation and personalization, New York: Association for Computing Machinery (ACM), 2024, p. 307-316. Conference paper (Refereed)
    Abstract [en]

    Health-promoting digital agents, taking on the role of an assistant, coach or companion, are expected to have knowledge about a person's medical and health aspects, yet they typically lack knowledge about the person's activities. These activities may vary daily or weekly and are contextually situated, posing challenges for human-agent interaction. This pilot study aimed to explore the experiences and behaviors of older adults when interacting with an initially unknowledgeable digital agent that queries them about an activity they are simultaneously engaged in. Five older adults participated in a scenario involving preparing coffee followed by having coffee with a guest. While performing these activities, participants educated the smartwatch-embedded agent, named Virtual Occupational Therapist (VOT), about their activity performance by answering a set of activity-ontology-based questions posed by the VOT. Participants' interactions with the VOT were observed, followed by a semi-structured interview focusing on their experience with the VOT. Collected data were analyzed using an activity-theoretical framework. Results revealed that participants exhibited agency and autonomy, deciding whether to adapt to the VOT's actions in three phases: adjustment to the VOT, partial adjustment, and the exercise of agency by putting the VOT to sleep after the social conditions and activity changed. The results imply that the VOT should incorporate the ability to distinguish when humans collaborate as the VOT expects and when they choose not to comply and instead act according to their own agenda. Future research will focus on how collaboration evolves and how the VOT needs to adapt in the process.

  • 2.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Computational models for intent recognition in robotic systems. 2020. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The ability to infer and mediate intentions has been recognized as a crucial task in recent robotics research, where it is broadly agreed that robots must be equipped with intentional mechanisms in order to participate in collaborative tasks with humans.

    Reasoning about - or rather, perceiving - intentions enables robots to infer what other agents are doing, to communicate their own plans, and to take proactive decisions. Intent recognition relates to several system requirements, such as the need for enhanced collaboration mechanisms in human-machine interaction, for adversarial technology in competitive scenarios, for ambient intelligence, and for predictive security systems.

    When attempting to describe what an intention is, there is broad agreement on representing it as a plan together with the goal that plan attempts to achieve. Because this representation is compatible with computer science concepts, it allows intentions to be handled with planning-based methodologies such as the Planning Domain Definition Language (PDDL) or Hierarchical Task Networks (HTNs).

    In this licentiate thesis we describe how intentions can be processed using classical planning methods, with an eye also to newer technologies such as deep neural networks. Our goal is to study and define computational models that would allow robotic agents to infer, construct and mediate intentions. Additionally, we explore how intentions in the form of abstract plans can be grounded in sensory data; in particular, we discuss grounding in speech utterances and in affordances, i.e., the action possibilities offered by an environment.
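The "plan together with its goal" representation described above can be sketched in a few lines of Python. This is an illustrative encoding only; the `Action`, `Intention` and `remaining_plan` names are hypothetical and not taken from the thesis:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    """A ground action, in the spirit of a PDDL/HTN operator instance."""
    name: str
    args: tuple = ()

@dataclass
class Intention:
    """An intention represented as a goal plus the plan meant to achieve it."""
    goal: str
    plan: list = field(default_factory=list)

def remaining_plan(intention: Intention, executed: list) -> list:
    """Actions still to perform, assuming `executed` is a prefix of the plan."""
    return intention.plan[len(executed):]

# A toy "serve coffee" intention:
make_coffee = Intention(
    goal="coffee_served",
    plan=[Action("boil_water"), Action("grind_beans"),
          Action("brew"), Action("serve")],
)
```

Under this encoding, intent recognition amounts to inferring which `Intention` best explains an observed prefix of actions.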

  • 3.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Expressing and recognizing intentions. 2022. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    With the advancement of Artificial Intelligence, intelligent computer programs known as agents are coming increasingly close to the lives of human beings. In an optimistic view, predictions tell us that agents will govern many machines such as phones, robots and cars. Agents will play an important role in our daily lives, and because of this it is becoming more and more relevant to give them the capacity to interact with their human users without requiring prior expert training, supporting understanding instead through common-sense reasoning. Striving towards this objective, one important aspect of agent design is intentionality, which relates to an agent's capacity to understand the goals and plans of its users, and to make its own understood. The ability to reason about their own goals and objectives and those of others is especially important for autonomous agents such as autonomous robots, because it enables them to interact with other agents by relying only on their sensor data and internal computations, rather than on explicit data provided by their designers.

    Intentionality imbues agents with additional capacities supporting cooperative activities: if a helpful agent recognizes an intention, it can proactively intervene in a helpful way, for instance by giving advice. Alternatively, whenever the agent detects that its user does not understand its objective, it can communicate the mismatching information. As an illustrative example, consider an autonomous car with a passenger on the road to his office. After checking online maps, the car identifies a traffic jam ahead and, to avoid it, autonomously decides to change its path mid-way by taking a less busy road. The car's behavior is quite intelligent; however, if left unexplained, the change of intention would leave the passenger wondering what is happening: he was on the main road to the office and suddenly the car turned left. At a minimum, this would force him to ask the car what is going on. Instead, a continuous degree of understanding can be maintained if the car detects such mismatches of intention by computing its passenger's expectations, and preemptively communicates the newly selected path whenever required.

    This process of communicating changes in intention looks simple but is in fact quite difficult. It requires reasoning about what the intentions of the user are, and about how and when they should be aligned with those of the car, either explicitly through a process of explanation, or implicitly through behavior that is interpretable by human beings. To support these capacities, it is becoming apparent that intelligent agents should leverage how we commonly think about things, referred to as common-sense reasoning. Common-sense reasoning relates to how we form conjectures based on what we observe, and agents that form conjectures in the same way could be better collaborators than those that reason in other ways. In this thesis we use an established model of common-sense reasoning known as Theory of Mind, of which the thesis contains a brief introduction.

    By leveraging Theory of Mind and classical agent architectures, this thesis provides an initial formulation of a novel computational architecture capable of aggregating multiple tasks related to intentionality, such as intent recognition, informative communication of intention, and interpretable behavior. We refer to this model as the Mirror Agent Model, because it envisions the agent as interacting with a mirrored copy of itself whenever supporting intentionality. Within the model, expressing and recognizing intentions are two faces of the same coin and are duals of each other. This represents a step towards unifying, in a single framework, many tasks related to intentionality that are currently treated mostly independently in the literature.

    The thesis first provides introductory chapters on agents, intentions and theory-of-mind reasoning, followed by a chapter describing the developed computational models. This chapter conceptually aggregates many of the algorithms from the papers and aims at providing an initial formulation of the Mirror Agent Model. Finally, the thesis concludes with a summary of contributions and concluding remarks.

  • 4.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Policy Regularization for Legible Behavior. 2021. Conference paper (Other academic)
    Abstract [en]

    In Reinforcement Learning, legible behavior requires maintaining a policy that is easily discernible from a set of other policies. While legibility has been thoroughly addressed in Explainable Planning, little work exists in the Reinforcement Learning literature. As we propose in this paper, injecting legible behavior into an agent's policy does not require modifying components of its learning algorithm. Rather, the agent's optimal policy can be regularized for legibility by evaluating how the policy may produce observations that would lead an observer to infer an incorrect policy. In our formulation, the decision boundary introduced by legibility affects the states in which the agent's policy returns an action that also has high likelihood under other policies. In these cases, a trade-off arises between that action and a legible but sub-optimal action.
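The regularization described in this abstract can be illustrated with a small sketch for tabular softmax policies. This is our reading of the idea rather than the paper's code: actions that are also likely under the alternative policies are penalized, and the strength of the penalty (`beta`, a hypothetical parameter name) sets the trade-off between optimality and legibility:

```python
import numpy as np

def legible_policy(pi_target, other_pis, beta=1.0):
    """Regularize a tabular policy for legibility.

    pi_target: (S, A) array of action probabilities for the agent's policy.
    other_pis: list of (S, A) arrays, policies an observer might confuse
               the agent's policy with.
    beta: trade-off between keeping the original policy (beta = 0) and
          disambiguating it from the alternatives.
    """
    # How likely each action is, on average, under the alternative policies:
    confusion = np.mean(other_pis, axis=0)
    # Penalize actions that the alternatives would also choose:
    scores = np.log(pi_target + 1e-12) - beta * np.log(confusion + 1e-12)
    # Renormalize per state with a softmax:
    exp_scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    return exp_scores / exp_scores.sum(axis=1, keepdims=True)

# One state, two actions: the agent is indifferent, but the alternative
# policy strongly prefers action 0, so the legible policy favors action 1.
pi = np.array([[0.5, 0.5]])
others = [np.array([[0.9, 0.1]])]
legible = legible_policy(pi, others)
```

With `beta = 0` the agent's original policy is recovered; larger values shift probability mass toward actions that disambiguate the agent's policy from the alternatives.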

  • 5.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Cagatay, Odabasi
    Fraunhofer Institute of Technology, Stuttgart, Germany.
    Graf, Florenz
    Fraunhofer Institute of Technology, Stuttgart, Germany.
    Kalra, Mohit
    Fraunhofer Institute of Technology, Stuttgart, Germany.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Graf, Birgit
    Fraunhofer Institute of Technology.
    Traveling Drinksman: A mobile service robot for people in care-homes. 2020. In: 52nd International Symposium on Robotics, ISR 2020, VDE Verlag GmbH, 2020, p. 31-36. Conference paper (Other academic)
    Abstract [en]

    This paper describes ongoing work on the development of a service robot for serving drinks to people sitting at tables, for example in the recreation room of a care home. The robot, denoted the Traveling Drinksman, should be able to detect the occupied tables, navigate safely according to defined policies, and interact with the seated humans to serve them a drink. We present initial results addressing all of these problems with different sub-modules, including numerical results for the human head detection module.

  • 6.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Inference of the Intentions of Unknown Agents in a Theory of Mind Setting. 2021. In: Advances in Practical Applications of Agents, Multi-Agent Systems, and Social Good. The PAAMS Collection / [ed] Dignum F., Corchado J.M., De La Prieta F., Springer Science+Business Media B.V., 2021, p. 188-200. Conference paper (Refereed)
    Abstract [en]

    Autonomous agents may be required to form an understanding of other agents for which they possess no model. In such cases, they must rely on their previously gathered knowledge of agents, and ground the observed behaviors in the models this knowledge describes through theory-of-mind reasoning. To give flesh to this process, in this paper we propose an algorithm that grounds observations in a combination of previously acquired Belief-Desire-Intention (BDI) models, while using rationality to infer unobservable variables. This makes it possible to jointly infer the beliefs, goals and intentions of an unknown observed agent using only the available models.
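A minimal sketch of the rationality assumption at work: given candidate agent models (here reduced to per-goal cost functions, a drastic simplification of full BDI models), observed behavior is scored with a Boltzmann "noisy-rationality" likelihood and normalized into a posterior over goals. The function names and the `rationality` parameter are our illustrative choices, not the paper's algorithm:

```python
import math

def infer_goal(observed, agent_models, rationality=2.0):
    """Posterior over goals for an unknown agent, assuming noisy-rational
    behavior: P(observed | goal) is proportional to exp(-rationality * cost),
    where cost measures how inefficient the observed actions are for the goal.

    agent_models: dict mapping goal -> cost function over action sequences.
    """
    weights = {goal: math.exp(-rationality * cost(observed))
               for goal, cost in agent_models.items()}
    total = sum(weights.values())
    return {goal: w / total for goal, w in weights.items()}

# Toy models: cost = number of observed actions that do not serve the goal.
models = {
    "fetch_cup": lambda obs: sum(a not in {"goto_kitchen", "grasp_cup"} for a in obs),
    "leave_room": lambda obs: sum(a not in {"goto_door", "open_door"} for a in obs),
}
posterior = infer_goal(["goto_kitchen", "grasp_cup"], models)
```

Observing kitchen-directed actions makes "fetch_cup" far more probable than "leave_room", since the same observations are costly to explain under the latter goal.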

  • 7.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Informative communication of robot plans. 2022. In: Advances in practical applications of agents, multi-agent systems, and complex systems simulation: the PAAMS collection / [ed] Frank Dignum; Philippe Mathieu; Juan Manuel Corchado; Fernando De La Prieta, Springer, 2022, p. 332-344. Conference paper (Other academic)
    Abstract [en]

    When a robot is asked to verbalize its plan, it can do so in many ways. For example, a seemingly natural strategy is the incremental one, where the robot verbalizes its planned actions in plan order. However, this type of strategy misses considerations of what is effectively informative to communicate, because it does not take into account what the user knows prior to the explanation. In this paper we propose a verbalization strategy that communicates robot plans informatively, by measuring the information gain of verbalizations against a second-order theory of mind of the user that captures the user's prior knowledge about the robot. As shown in our experiments, this strategy allows the robot's goal to be understood much more quickly than strategies such as increasing or decreasing plan order. In addition, our formulation hints at what is informative, and why, when a robot communicates its plan.
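The selection criterion can be sketched as entropy reduction against an observer model. Here `observer_posterior` is a hypothetical stand-in for the second-order theory of mind: it returns the observer's belief over the robot's goals after hearing a given action verbalized, and the most informative verbalization is the one that leaves the observer least uncertain:

```python
import math

def entropy(dist):
    """Shannon entropy (in nats) of a goal distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def most_informative_action(plan, observer_posterior):
    """Choose the plan action whose verbalization minimizes the observer's
    remaining uncertainty about the robot's goal, i.e. maximizes the
    information gained over the observer's prior belief."""
    return min(plan, key=lambda action: entropy(observer_posterior(action)))

# Fabricated observer beliefs after hearing each action verbalized:
beliefs_after = {
    "goto_kitchen": {"fetch_cup": 0.5, "make_coffee": 0.5},  # ambiguous
    "grind_beans": {"fetch_cup": 0.1, "make_coffee": 0.9},   # disambiguating
}
best = most_informative_action(["goto_kitchen", "grind_beans"], beliefs_after.get)
```

Verbalizing "grind_beans" first is preferred here because it disambiguates the goal, whereas plan-order verbalization would start with the uninformative "goto_kitchen".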

  • 8.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Intent Recognition from Speech and Plan Recognition. 2020. In: Advances in Practical Applications of Agents, Multi-Agent Systems, and Trustworthiness: The PAAMS Collection / [ed] Yves Demazeau, Tom Holvoet, Juan M. Corchado, Stefania Costantini, Springer, 2020, p. 212-223. Conference paper (Refereed)
    Abstract [en]

    In multi-agent systems, the ability to infer intentions allows artificial agents to act proactively and with partial information. In this paper we propose an algorithm to infer a speaker's intentions by combining natural language analysis with plan recognition. We define a Natural Language Understanding component that classifies semantic roles from sentences into partially instantiated actions, which are interpreted as the intention of the speaker. These actions are grounded in arbitrary, hand-defined task domains. Intent recognition with partial actions is statistically evaluated on several planning domains. We then define a Human-Robot Interaction setting in which both utterance classification and plan recognition are tested using a Pepper robot. We further address the issue of missing parameters in declared intentions and robot commands by leveraging the Principle of Rational Action, which is embedded in the plan recognition phase.

  • 9.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Policy regularization for legible behavior. 2023. In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 35, no 23, p. 16781-16790. Article in journal (Refereed)
    Abstract [en]

    In this paper we propose a method to augment a Reinforcement Learning agent with legibility. The method is inspired by the literature on Explainable Planning and makes it possible to regularize the agent's policy after training, without requiring modifications to its learning algorithm. This is achieved by evaluating how the agent's optimal policy may produce observations that would lead an observer model to infer a wrong policy. In our formulation, the decision boundary introduced by legibility affects the states in which the agent's policy returns an action that is non-legible because it also has high likelihood under other policies. In these cases, a trade-off is made between that action and a legible but sub-optimal action. We tested our method in a grid-world environment, highlighting how legibility impacts the agent's optimal policy, and gathered both quantitative and qualitative results. In addition, we discuss how the proposed regularization generalizes over methods that work with goal-driven policies, because it is applicable to general policies, of which goal-driven policies are a special case.

  • 10.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Probabilistic Plan Legibility with Off-the-shelf Planners. Manuscript (preprint) (Other academic)
    Abstract [en]

    Legible planning is the creation of plans that, from an observer's perspective, best disambiguate their goals from a set of other candidates. In this paper we propose a method for legible planning in arbitrary PDDL domains, extending previous research on legibility to classical planning without requiring the construction of ad-hoc planners. We also discuss how the observer's perspective may be estimated through a second-order theory of mind that connects the planner's and the observer's task spaces. Our solution can, for example, be deployed in human-robot teaming scenarios, where an autonomous robot in a team can implicitly communicate its goal by producing legible plans. We present benchmark results on several PDDL planning domains. Our results generally show that plan legibility trades off against plan efficiency; however, not all planning domains allow legibility to be increased in the same way, and a regularizing factor to balance legibility and efficiency proved necessary.
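The observer model behind plan legibility can be sketched as a Boltzmann posterior over goal hypotheses: a plan is legible when, judged against each candidate goal's cost, it is plausible mainly for the intended one. The `costs` dictionary and `rationality` parameter are illustrative simplifications, not the paper's formulation:

```python
import math

def goal_posterior(costs, intended, rationality=1.0):
    """Observer's posterior probability of the intended goal after seeing a plan.

    costs: dict mapping goal -> cost of the observed plan when interpreted as
    pursuing that goal; cheaper interpretations are more plausible under a
    Boltzmann (noisy-rational) observer.
    """
    weights = {g: math.exp(-rationality * c) for g, c in costs.items()}
    return weights[intended] / sum(weights.values())

# An ambiguous plan serves both goals equally well; a legible plan is
# cheap only under the intended goal.
ambiguous = goal_posterior({"goal_a": 1.0, "goal_b": 1.0}, "goal_a")
legible = goal_posterior({"goal_a": 1.0, "goal_b": 5.0}, "goal_a")
```

A legible planner would search for the plan maximizing this posterior, regularized by plan cost, which reproduces the legibility-versus-efficiency trade-off the benchmarks report.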

  • 11.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    The mirror agent model: a Bayesian architecture for interpretable agent behavior. 2022. In: Explainable and transparent AI and multi-agent systems: 4th international workshop, EXTRAAMAS 2022, virtual event, May 9-10, 2022, revised selected papers / [ed] Davide Calvaresi; Amro Najjar; Michael Winikoff; Kary Främling, Springer Nature, 2022, p. 111-123. Conference paper (Other academic)
    Abstract [en]

    In this paper we illustrate a novel architecture that generates interpretable behavior and explanations. We refer to this architecture as the Mirror Agent Model because it defines the observer model, which is the target of explicit and implicit communication, as a mirror of the agent's own model. With the goal of providing a general understanding of this work, we first present prior relevant results addressing the informative communication of agents' intentions and the production of legible behavior. In the second part of the paper we extend the architecture with novel capabilities for explanation through off-the-shelf saliency methods, followed by preliminary qualitative results.

  • 12.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Unsupervised Inference of Object Affordance from Text Corpora. 2019. In: Proceedings of the 22nd Nordic Conference on Computational Linguistics / [ed] Mareike Hartmann, Barbara Plank, Association for Computational Linguistics, 2019, article id W19-6112. Conference paper (Refereed)
    Abstract [en]

    Affordances denote the actions that can be performed in the presence of different objects, or more generally the possibilities for action in an environment. In robotic systems, affordances and actions may suffer from poor semantic generalization due to the large amount of hand-crafted specification required. To alleviate this issue, we propose a method to mine object-action pairs from free text corpora, subsequently training and evaluating different affordance prediction models based on word embeddings.
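An embedding-based affordance predictor of the general kind evaluated here can be reduced to a similarity score between a verb vector and an object vector. The toy three-dimensional vectors below are fabricated for illustration; real models would use embeddings trained on the mined corpora:

```python
import numpy as np

def affordance_score(verb_vec: np.ndarray, obj_vec: np.ndarray) -> float:
    """Cosine similarity between a verb embedding and an object embedding;
    a higher score suggests the action is more plausible for the object."""
    return float(np.dot(verb_vec, obj_vec) /
                 (np.linalg.norm(verb_vec) * np.linalg.norm(obj_vec)))

# Fabricated toy embeddings (not from any trained model):
drink = np.array([1.0, 0.2, 0.0])
cup = np.array([0.9, 0.3, 0.1])
car = np.array([0.0, 0.1, 1.0])
```

Under these toy vectors, "drink cup" scores higher than "drink car", mirroring the object-action pairs one would expect to mine from text.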

  • 13.
    Persiani, Michele
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tewari, Maitreyee
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Mediating joint intentions with a dialogue management system. 2020. In: NeHuAI 2020 - first international workshop on new foundations for human-centered AI: proceedings of the first international workshop on new foundations for human-centered AI (NeHuAI) co-located with the 24th European conference on artificial intelligence (ECAI 2020) / [ed] Alessandro Saffiotti; Luciano Serafini; Paul Lukowicz, RWTH Aachen University, 2020, p. 79-82. Conference paper (Refereed)
    Abstract [en]

    A necessary skill for machines to take part in decision-making processes with their users is the ability to participate in the mediation of joint intentions. This paper contains an initial formulation of an architecture for creating and mediating joint intentions with an artificial agent. The architecture combines plan recognition techniques, used to identify the user's intention, with a Reinforcement Learning network that learns how best to interact with the inferred intention.

  • 14.
    Tewari, Maitreyee
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards we-intentional human-robot interaction using theory of mind and hierarchical task network. 2021. In: Proceedings of the 5th International Conference on Computer-Human Interaction Research and Applications - Volume 1: Humanoid / [ed] Hugo Plácido Silva, Larry Constantine, Andreas Holzinger, SciTePress Digital Library, 2021, p. 291-299. Conference paper (Refereed)
    Abstract [en]

    Joint activity between human and robot agents requires them not only to form a joint intention and share a mutual understanding of it, but also to determine their type of commitment. Such commitment types allow the robot agent to select appropriate strategies based on what can be expected from the others involved in performing the joint activity. This work proposes an architecture embedding commitments as we-intentional modes in a belief-desire-intention (BDI) based Theory of Mind (ToM) model. Dialogue mediation gathers observations that allow the ToM to infer the joint activity, and a hierarchical task network (HTN) plans the execution. The work is ongoing, and the proposed architecture is currently being implemented for evaluation in human-robot interaction studies.

  • 15.
    Tewari, Maitreyee
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Persiani, Michele
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Variational Autoencoding Dialogue Sub-structures Using a Novel Hierarchical Annotation Schema. 2020. In: 2020 6th IEEE Congress on Information Science and Technology (CiSt) / [ed] Mohammed El Mohajir; Mohammed Al Achhab; Badr Eddine El Mohajir; Bernadetta Kwintiana Ane; Ismail Jellouli, Institute of Electrical and Electronics Engineers (IEEE), 2020, p. 334-341. Conference paper (Refereed)
    Abstract [en]

    This work presents a novel method to extract sub-structures in dialogues of the following genres: human-human task-driven, human-human chit-chat, human-machine task-driven, and human-machine chit-chat. The model consists of a novel semi-supervised annotation schema covering syntactic features, communicative functions, dialogue policy, sequence expansion and sender information. These labels are transformed into tuples of three, four and five segments, which are used as features to learn sub-structures in the above-mentioned dialogue genres with sequence-to-sequence variational autoencoders. The results analyze the latent space of generic sub-structures decomposed by PCA and ICA, showing an increase in silhouette scores when clustering the latent space.
