umu.se Publications
1 - 5 of 5
  • 1.
    Coser, Omar
    et al.
    Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Rome, Italy; Unit of Advanced Robotics and Human-Centered Technologies, Università Campus Bio-Medico di Roma, Rome, Italy.
    Tamantini, Christian
    Unit of Advanced Robotics and Human-Centered Technologies, Università Campus Bio-Medico di Roma, Rome, Italy.
    Soda, Paolo
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. Umeå University, Faculty of Medicine, Department of Diagnostics and Intervention. Unit of Computer Systems and Bioinformatics, Università Campus Bio-Medico di Roma, Rome, Italy.
    Zollo, Loredana
    Unit of Advanced Robotics and Human-Centered Technologies, Università Campus Bio-Medico di Roma, Rome, Italy.
    AI-based methodologies for exoskeleton-assisted rehabilitation of the lower limb: a review (2024). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 11, article id 1341580. Article, review/survey (Refereed)
    Abstract [en]

    Over the past few years, there has been a noticeable surge in efforts to design novel tools and approaches that incorporate Artificial Intelligence (AI) into the rehabilitation of persons with lower-limb impairments using robotic exoskeletons. The potential benefits include the ability to implement personalized rehabilitation therapies by leveraging AI for robot control and data analysis, facilitating personalized feedback and guidance. Despite this, there is currently a lack of literature reviews specifically focusing on AI applications in lower-limb rehabilitation robotics. To address this gap, our work reviews 37 peer-reviewed papers, categorizing them by robotic application scenario and AI methodology. Additionally, it uniquely contributes a detailed summary of the input features, AI model performance, enrolled populations, exoskeletal systems used in the validation process, and specific tasks for each paper. The innovative aspect lies in offering a clear understanding of the suitability of different algorithms for specific tasks, intending to guide future developments and support informed decision-making in the realm of lower-limb exoskeletons and AI applications.

    Download full text (pdf)
  • 2.
    Engwall, Olov
    et al.
    Division of Speech, Music and Hearing, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
    Bandera Rubio, Juan Pedro
    Departemento de Tecnología Electrónica, University of Málaga, Málaga, Spain.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Haring, Kerstin Sophie
    Robots and Sensors for the Human Well-Being, Ritchie School of Engineering and Computer Science, University of Denver, Denver, United States.
    Kanda, Takayuki
    HRI Lab, Kyoto University, Kyoto, Japan.
    Núñez, Pedro
    Tecnología de los Computadores y las Comunicaciones Department, University of Extremadura, Badajoz, Spain.
    Rehm, Matthias
    The Technical Faculty of IT and Design, Aalborg University, Aalborg, Denmark.
    Sgorbissa, Antonio
    Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genoa, Genoa, Italy.
    Editorial: Socially, culturally and contextually aware robots (2023). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 10, article id 1232215. Article in journal (Other academic)
    Download full text (pdf)
  • 3.
    Rietz, Finn
    et al.
    Sutherland, Alexander
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Wermter, Stefan
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    WoZ4U: An Open-Source Wizard-of-Oz Interface for Easy, Efficient and Robust HRI Experiments (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 668057. Article in journal (Refereed)
    Abstract [en]

    Wizard-of-Oz experiments play a vital role in Human-Robot Interaction (HRI), as they allow for quick and simple hypothesis testing. Still, no general, publicly available tool for conducting such experiments currently exists, and researchers often develop and implement their own tools, customized for each individual experiment. Besides the inefficiency of this duplicated programming effort, it also makes it harder for non-technical researchers to conduct Wizard-of-Oz experiments. In this paper, we present a general and easy-to-use tool for the Pepper robot, one of the most commonly used robots in this context. While we provide the concrete interface for Pepper robots only, the system architecture is independent of the type of robot and can be adapted for other robots. A configuration file, which saves experiment-specific parameters, enables a quick setup for reproducible and repeatable Wizard-of-Oz experiments. A central server provides a graphical interface via a browser while handling the mapping of user input to actions on the robot. In our interface, keyboard shortcuts may be assigned to phrases, gestures, and composite behaviors to simplify and speed up control of the robot. The interface is lightweight and independent of the operating system. Our initial tests confirm that the system is functional, flexible, and easy to use. The interface, including source code, is made publicly available, and we hope that it will be useful for researchers with any background who want to conduct HRI experiments.

    Download full text (pdf)
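The WoZ4U abstract describes a configuration-driven design: experiment-specific keyboard shortcuts are saved in a configuration file, and a central server maps each wizard key press to a phrase or gesture on the robot. A minimal sketch of that idea, assuming a simple dict-based config (the class, keys, and action names here are hypothetical, not taken from the WoZ4U source):

```python
class WizardController:
    """Illustrative Wizard-of-Oz controller: maps key presses to actions."""

    def __init__(self, config):
        # config maps a keyboard shortcut to (action kind, payload),
        # e.g. {"g": ("say", "Hello!"), "w": ("gesture", "wave")}.
        # In WoZ4U this would come from the experiment's config file.
        self.config = config
        self.log = []  # record triggered actions for reproducibility

    def handle_key(self, key):
        action = self.config.get(key)
        if action is None:
            return None  # unbound key: ignore
        kind, payload = action
        self.log.append((kind, payload))
        # A real controller would forward this to the robot here.
        return f"{kind}:{payload}"

controller = WizardController({"g": ("say", "Hello!"),
                               "w": ("gesture", "wave")})
print(controller.handle_key("g"))  # say:Hello!
print(controller.handle_key("x"))  # None (unbound key)
```

Keeping the shortcut table in data rather than code is what makes such a setup reproducible: rerunning an experiment only requires reloading the same configuration.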
  • 4.
    Tewari, Maitreyee
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lindgren, Helena
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Expecting, understanding, relating, and interacting: older, middle-aged and younger adults' perspectives on breakdown situations in human–robot dialogues (2022). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 9, article id 956709. Article in journal (Refereed)
    Abstract [en]

    Purpose: The purpose of this study is to explore how older, middle-aged and younger adults perceive breakdown situations caused by lacking or inconsistent knowledge, sudden focus shifts, and conflicting intentions in dialogues between a human and a socially intelligent robot in a home environment, and how they perceive strategies to manage such breakdown situations.

    Methods: Scenarios embedding dialogues on health-related topics were constructed based on activity-theoretical and argumentation frameworks. Different reasons for breakdown situations and strategies to handle these were embedded. The scenarios were recorded in a Wizard-of-Oz setup, with a human actor and a Nao robot. Twenty participants between 23 and 72 years of age viewed the recordings and participated in semi-structured interviews conducted remotely. Data were analyzed qualitatively using thematic analysis.

    Results: Four themes relating to breakdown situations emerged: expecting, understanding, relating, and interacting. The themes span complex human activity at different complementary levels and provide a further developed understanding of breakdown situations in human–robot interaction (HRI). Older and middle-aged adults emphasized empathic behavior and adherence to social norms, while younger adults focused on functional aspects such as gaze, response time, and length of utterances. A hierarchical taxonomy of aspects relating to breakdown situations was formed, and design implications are provided, guiding future research.

    Conclusion: We conclude that a socially intelligent robot agent needs strategies to 1) construct and manage its understanding of the human's emotions, social norms, knowledge, and motives at a higher level of meaningful human activity, 2) act accordingly, for instance by adhering to transparent social roles, and 3) resolve conflicting motives and identify reasons to prevent and manage breakdown situations at different levels of collaborative activity. Furthermore, the novel methodology of framing the dynamics of human–robot dialogues in complex activities using Activity Theory and argumentation theory was instrumental in this work and will guide future work on tailoring the robot's behavior.

    Download full text (pdf)
  • 5.
    Winfield, Alan F. T.
    et al.
    Bristol Robotics Laboratory, UWE Bristol, Bristol, United Kingdom.
    Booth, Serena
    Computer Science and AI Laboratory (CSAIL), MIT, Cambridge, MA, United States.
    Dennis, Louise A.
    Department of Computer Science, University of Manchester, Manchester, United Kingdom.
    Egawa, Takashi
    NEC Corporation, Tokyo, Japan.
    Hastie, Helen
    Department of Computer Science, Heriot-Watt University, Edinburgh, United Kingdom.
    Jacobs, Naomi
    ImaginationLancaster, Lancaster Institute for Contemporary Arts, University of Lancaster, Lancaster, United Kingdom.
    Muttram, Roderick I.
    Fourth Insight Ltd, Ewhurst, United Kingdom.
    Olszewska, Joanna I.
    School of Computing and Engineering, University of the West of Scotland, Paisley, United Kingdom.
    Rajabiyazdi, Fahimeh
    Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada.
    Theodorou, Andreas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Underwood, Mark A.
    Synchrony Financial, Stamford, CT, United States.
    Wortham, Robert H.
    Department of Electronic and Electrical Engineering, University of Bath, Bath, United Kingdom.
    Watson, Eleanor
    Nell Watson Ltd, Carrickfergus, United Kingdom.
    IEEE P7001: A Proposed Standard on Transparency (2021). In: Frontiers in Robotics and AI, E-ISSN 2296-9144, Vol. 8, article id 665729. Article in journal (Refereed)
    Abstract [en]

    This paper describes IEEE P7001, a new draft standard on transparency of autonomous systems. In the paper, we outline the development and structure of the draft standard. We present the rationale for transparency as a measurable, testable property. We outline five stakeholder groups: users; the general public and bystanders; safety certification agencies; incident/accident investigators; and lawyers/expert witnesses, and explain the thinking behind the normative definitions of “levels” of transparency for each stakeholder group in P7001. The paper illustrates the application of P7001 through worked examples of both specification and assessment of fictional autonomous systems.

    Download full text (pdf)
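The P7001 abstract frames transparency as a measurable, testable property, with a level defined per stakeholder group. A minimal sketch of how such an assessment could be checked mechanically, assuming illustrative group names and numeric levels (none of this is taken from the standard's normative text):

```python
# Hypothetical P7001-style assessment: each stakeholder group is
# assigned a required transparency level and an achieved level
# (here 0 = no transparency, higher = more transparent); a system
# fails for any group whose requirement is not met.

STAKEHOLDERS = ["users", "general public/bystanders",
                "certification agencies", "incident investigators",
                "lawyers/expert witnesses"]

def unmet_groups(required, achieved):
    """Return the stakeholder groups whose required level is not met."""
    return [s for s in STAKEHOLDERS
            if achieved.get(s, 0) < required.get(s, 0)]

required = {s: 3 for s in STAKEHOLDERS}          # specification
achieved = dict.fromkeys(STAKEHOLDERS, 4)        # assessment
achieved["incident investigators"] = 2           # one shortfall
print(unmet_groups(required, achieved))  # ['incident investigators']
```

Separating the specification (required levels) from the assessment (achieved levels) mirrors the paper's worked examples, where both activities are applied to the same fictional system.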