Umeå University publications (umu.se)
Search result: 1-50 of 99
  • 1.
    Abedin, Md Reaz Ashraful
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Self-supervised language grounding by active sensing combined with Internet acquired images and text (2017). In: Proceedings of the Fourth International Workshop on Recognition and Action for Scene Understanding (REACTS2017) / [ed] Jorge Dias, George Azzopardi, Rebeca Marf, Málaga: REACTS, 2017, p. 71-83. Conference paper (Refereed)
    Abstract [en]

    For natural and efficient verbal communication between a robot and humans, the robot should be able to learn names and appearances of new objects it encounters. In this paper we present a solution combining active sensing of images with text-based and image-based search on the Internet. The approach allows the robot to learn both the object name and how to recognise similar objects in the future, all self-supervised without human assistance. One part of the solution is a novel iterative method to determine the object name using image classification, acquisition of images from additional viewpoints, and Internet search. In this paper, the algorithmic part of the proposed solution is presented together with evaluations using manually acquired camera images, while Internet data was acquired through direct and reverse image search with Google, Bing, and Yandex. Classification with multi-class SVM and with five different feature settings was evaluated. With five object classes, the best performing classifier used a combination of Pyramid of Histogram of Visual Words (PHOW) and Pyramid of Histogram of Oriented Gradient (PHOG) features, and reached a precision of 80% and a recall of 78%.

  • 2.
    Ali, W
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Georgsson, Fredrik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Visual tree detection for autonomous navigation in forest environment (2008). In: IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 2008, p. 1144-1149. Conference paper (Refereed)
    Abstract [en]

    This paper describes a classification-based tree detection method for autonomous navigation of forest vehicles in forest environments. Fusion of color and texture cues has been used to segment the image into tree trunk and background objects. This segmentation is a challenging task due to high variations in illumination, the effect of different color shades, non-homogeneous bark texture, shadows, and foreshortening. To accomplish this, the approach has been to find the best combinations of color and texture descriptors and classification techniques. An additional task has been to estimate the distance between the forest vehicle and the base of segmented trees using monocular vision. A simple heuristic distance measurement method is proposed, based on pixel height and a reference width. The performance of various color and texture operators, and the accuracy of classifiers, has been evaluated using cross-validation techniques.

  • 3.
    Arad, Boaz
    et al.
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Balendonck, Jos
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Barth, Ruud
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Ben-Shahar, Ohad
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hemming, Jochen
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tielen, Toon
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    van Tuijl, Bart
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Development of a sweet pepper harvesting robot (2020). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 37, no 6, p. 1027-1039. Article in journal (Refereed)
    Abstract [en]

    This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, RGB-D camera, high-end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All is mounted on a cart that autonomously drives on pipe rails and concrete floor in the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have proven that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.

  • 4.
    Arafat, Yeasin
    et al.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Rashid, Jayedur
    Parameterized sensor model and an approach for measuring goodness of robotic maps (2010). In: Proceedings of the 15th IASTED International Conference on Robotics and Applications (RA 2010), ACTA Press, 2010. Conference paper (Refereed)
    Abstract [en]

    Map building is a classical problem in mobile and autonomous robotics, and sensor models are a way to interpret raw sensory information, especially for building maps. In this paper we propose a parameterized sensor model, and optimize map goodness with respect to these parameters. A new approach, measuring the goodness of maps without a handcrafted map of the actual environment, is introduced and evaluated. Three different techniques (statistical analysis, derivative of images, and comparison of binary maps) have been used as estimates of map goodness. The results show that the proposed sensor model generates better maps than a standard sensor model. However, the proposed approach of measuring goodness of maps does not improve the results as much as expected.

  • 5.
    Athanassiadis, Dimitris
    et al.
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Bergström, Dan
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lindroos, Ola
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Nordfjell, Tomas
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Path tracking for autonomous forwarders in forest terrain (2010). In: Precision Forestry Symposium: developments in Precision Forestry since 2006 / [ed] Ackerman P A, Ham H, & Lu C, 2010, p. 42-43. Conference paper (Refereed)
  • 6.
    Baranwal, Neha
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Singh, Avinash
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Fusion of Gesture and Speech for Increased Accuracy in Human Robot Interaction (2019). In: 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR), IEEE, 2019, p. 139-144. Conference paper (Refereed)
    Abstract [en]

    An approach for decision-level fusion for gesture- and speech-based human-robot interaction (HRI) is proposed. A rule-based method is compared with several machine learning approaches. Gestures and speech signals are initially classified using hidden Markov models, reaching accuracies of 89.6% and 84%, respectively. The rule-based approach reached 91.6%, while SVM, the best of all evaluated machine learning algorithms, reached an accuracy of 98.2% on the test data. A complete framework is deployed in real time on a humanoid robot (NAO), which demonstrates the efficacy of the system.

  • 7.
    Barth, Ruud
    et al.
    Greenhouse Horticulture, Wageningen University & Research Center.
    Baur, Jörg
    Institute of Applied Mechanics, Technische Universität München.
    Buschmann, Thomas
    Institute of Applied Mechanics, Technische Universität München.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nguyen, Thanh
    KU Leuven, Department of Biosystems.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Saeys, Wouter
    KU Leuven, Department of Biosystems.
    Salinas, Carlota
    Centre for Automation and Robotics UPM-CSIC.
    Vitzrabin, Efi
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev.
    Using ROS for agricultural robotics: design considerations and experiences (2014). In: RHEA-2014 / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, 2014, p. 509-518. Conference paper (Refereed)
    Abstract [en]

    We report on experiences of using the ROS middleware for development of agricultural robots. We describe software related design considerations for all main components in developed subsystems as well as drawbacks and advantages with the chosen approaches. This work was partly funded by the European Commission (CROPS GA no 246252).

  • 8.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Dignum, Frank
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Increasing robot understandability through social practices (2022). In: Proceedings of Cultu-Ro 2022, Workshop on Cultural Influences in Human-Robot Interaction: Today and Tomorrow: 31st IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 22), 2022. Conference paper (Refereed)
    Abstract [en]

    In this short paper we discuss how incorporating social practices in robotics may contribute to how well humans understand robots' actions and intentions. Since social practices typically are applied by all interacting parties, the robots' understanding of the humans may also improve. We further discuss how the involved mechanisms have to be adjusted to fit the cultural context in which the interaction takes place, and how social practices may have to be transformed to fit a robot's capabilities and limitations.

  • 9.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Drewes, Frank
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Grammatical Inference of Graph Transformation Rules (2015). In: Proceedings of the 7th Workshop on Non-Classical Models of Automata and Applications (NCMA 2015), Austrian Computer Society, 2015, p. 73-90. Conference paper (Refereed)
  • 10.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    On ambiguity in learning from demonstration (2010). In: Intelligent Autonomous Systems 11 (IAS-11) / [ed] H. Christensen, F. Groen, and E. Petriu, Amsterdam: IOS Press, 2010, p. 47-56. Conference paper (Refereed)
    Abstract [en]

    An overlooked problem in Learning From Demonstration is the ambiguity that arises, for instance, when the robot is equipped with more sensors than necessary for a certain task. Simply trying to repeat all aspects of a demonstration is seldom what the human teacher wants, and without additional information, it is hard for the robot to know which features are relevant and which should be ignored. This means that a single demonstration maps to several different behaviours the teacher might have intended. This one-to-many (or many-to-many) mapping from a demonstration (or several demonstrations) into possible intended behaviours is the ambiguity that is the topic of this paper. Ambiguity is defined as the size of the current hypothesis space. We investigate the nature of the ambiguity for different kinds of hypothesis spaces and how it is reduced by a new concept learning algorithm.

  • 11.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Proceedings of Umeå's 18th student conference in computing science: USCCS 2014.1 (2014). Conference proceedings (editor) (Other academic)
  • 12.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Proceedings of Umeå's 20th student conference in computing science: USCCS 2016 (2016). Conference proceedings (editor) (Other academic)
  • 13.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Proceedings of Umeå's 21st student conference in computing science: USCCS 2017 (2017). Conference proceedings (editor) (Other academic)
  • 14.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Proceedings of Umeå's 22nd Student Conference in Computing Science (USCCS 2018) (2018). Conference proceedings (editor) (Other academic)
  • 15.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Proceedings of Umeå's 23rd Student Conference in Computing Science: USCCS 2019 (2019). Conference proceedings (editor) (Other academic)
    Abstract [en]

    The Umeå Student Conference in Computing Science (USCCS) is organized annually as part of a course given by the Computing Science department at Umeå University. The objective of the course is to give the students a practical introduction to independent research, scientific writing, and oral presentation.

    A student who participates in the course first selects a topic and a research question that he or she is interested in. If the topic is accepted, the student outlines a paper and composes an annotated bibliography to give a survey of the research topic. The main work consists of conducting the actual research that answers the question asked, and convincingly and clearly reporting the results in a scientific paper. Another major part of the course is multiple internal peer review meetings in which groups of students read each other's papers and give feedback to the authors. This process gives valuable training in both giving and receiving criticism in a constructive manner. Altogether, the students learn to formulate and develop their own ideas in a scientific manner, in a process involving internal peer reviewing of each other's work, supervision by the teachers, and incremental development and refinement of a scientific paper.

    Each scientific paper is submitted to USCCS through an on-line submission system, and receives reviews written by members of the Computing Science department. Based on the reviews, the editors of the conference proceedings (the teachers of the course) issue a decision of preliminary acceptance of the paper to each author. If, after final revision, a paper is accepted, the student is given the opportunity to present the work at the conference. The review process and the conference format aim at mimicking realistic settings for publishing and participating in scientific conferences.

  • 16.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards Proactive Robot Behavior Based on Incremental Language Analysis (2014). In: MMRWHRI '14: Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction / [ed] Mary Ellen Foster, Manuel Giuliani, Ronald P. A. Petrick, 2014, p. 21-22. Conference paper (Refereed)
    Abstract [en]

    This paper describes ongoing and planned work on incremental language processing coupled to inference of expected robot actions. Utterances are processed word-by-word, simultaneously with inference of expected robot actions, thus enabling the robot to prepare and act proactively to human utterances. We believe that such a model results in more natural human-robot communication since proactive behavior is a feature of human-human communication.

  • 17.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Jevtic, Aleksandar
    Institut de Robotica i Informatica Industrial, Technical University of Catalonia, Spain.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    On Interaction Quality in Human-Robot Interaction (2017). In: Proceedings of the 9th International Conference on Agents and Artificial Intelligence / [ed] H. Jaap van den Herik, Ana Paula Rocha, Joaquim Filipe, Setúbal: SciTePress, 2017, Vol. 1, p. 182-189. Conference paper (Refereed)
    Abstract [en]

    In many complex robotics systems, interaction takes place in all directions between human, robot, and environment. Performance of such a system depends on this interaction, and a proper evaluation of a system must build on a proper modeling of interaction, a relevant set of performance metrics, and a methodology to combine metrics into a single performance value. In this paper, existing models of human-robot interaction are adapted to fit complex scenarios with one or several humans and robots. The interaction and the evaluation process are formalized, and a general method to fuse performance values over time and over several performance metrics is presented. The resulting value, denoted interaction quality, adds a dimension to ordinary performance metrics by being explicit about the interplay between performance metrics, and thereby provides a formal framework to understand, model, and address complex aspects of evaluation of human-robot interaction.

  • 18.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Sun, Jiangeng
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bandera Rubio, Juan Pedro
    Department of Electronic Technology, University of Málaga, Málaga, Spain.
    Romero-Garcés, Adrián
    Department of Electronic Technology, University of Málaga, Málaga, Spain.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Personalised multi-modal communication for HRI (2023). Conference paper (Refereed)
    Abstract [en]

    One important aspect when designing understandable robots is how robots should communicate with a human user to be understood in the best way. In elder care applications this is particularly important, and also difficult since many older adults suffer from various kinds of impairments. In this paper we present a solution where communication modality and communication parameters are adapted to fit both a user profile and an environment model comprising information about light and sound conditions that may affect communication. The Rasa dialogue manager is complemented with necessary functionality, and the operation is verified with a Pepper robot interacting with several personas with impaired vision, hearing, and cognition. Several relevant ethical questions are identified and briefly discussed, as a contribution to the WARN workshop.

  • 19.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A formalism for learning from demonstration (2010). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no 1, p. 1-13. Article in journal (Refereed)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

  • 20.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Behavior recognition for segmentation of demonstrated tasks (2008). In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008. Conference paper (Refereed)
    Abstract [en]

    One common approach to the robot learning technique Learning From Demonstration is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level, which can be seen as an alternative to a pure classification approach. Performed tests show how the former approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather a number of alternative interpretations.

  • 21.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Formalising learning from demonstration (2008). Report (Other academic)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

  • 22.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Predictive Learning in Context (2010). In: Proceedings of the tenth international conference on epigenetic robotics: modeling cognitive development in robotic systems / [ed] Birger Johansson, Erol Sahin & Christian Balkenius, Lund, Sweden, 2010, p. 157-158. Conference paper (Refereed)
  • 23.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Behavior recognition for learning from demonstration (2010). In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., IEEE, 2010, p. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.

  • 24.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Model-free learning from demonstration (2010). In: ICAART 2010 - Proceedings of the international conference on agents and artificial intelligence: volume 2 / [ed] Joaquim Filipe, Ana LN Fred, Bernadette Sharp, Portugal: INSTICC, 2010, p. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 25.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Predictive learning from demonstration2011In: Agents and artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Filipe, Joaquim, Fred, Ana, Sharp, Bernadette, Berlin: Springer Verlag , 2011, 1, p. 186-200Chapter in book (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of past sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Download full text (pdf)
    fulltext
  • 26.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Robot learning from demonstration using predictive sequence learning2011In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2011, p. 235-250Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 27.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Simultaneous control and recognition of demonstrated behavior2011Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model, and the robot can in this way automatically select the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

    Download full text (pdf)
    fulltext
  • 28. Billing, Erik
    et al.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Simultaneous recognition and reproduction of demonstrated behavior2015In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, p. 43-53Article in journal (Refereed)
    Abstract [en]

    Prediction of sensory-motor interactions with the world is often referred to as a key component of cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: (1) to identify which behavior best matches the current context and (2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.
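The context-identification step, choosing the behavior whose learned associations best fit what the sensors currently report, can be sketched as a simple scoring loop. The rule encoding, behavior names, and the hit-ratio score below are illustrative assumptions, not the fuzzy-rule machinery of the paper.

```python
# Hedged illustration (names and structure are assumptions, not the paper's
# code): select the behavior context whose learned (state, action) -> outcome
# rules best match recently observed sensory-motor events.

def context_match(rules, recent_events):
    """Score a context by the fraction of its rules confirmed by observation."""
    hits = sum(1 for (state, action), outcome in rules.items()
               if (state, action, outcome) in recent_events)
    return hits / len(rules) if rules else 0.0

def select_context(contexts, recent_events):
    """Pick the behavior whose rule base best explains recent events."""
    return max(contexts, key=lambda c: context_match(contexts[c], recent_events))

contexts = {
    "go_to_kitchen": {("hall", "fwd"): "kitchen", ("kitchen", "stop"): "kitchen"},
    "go_to_door":    {("hall", "turn"): "door"},
}
observed = {("hall", "fwd", "kitchen"), ("kitchen", "stop", "kitchen")}
print(select_context(contexts, observed))  # -> go_to_kitchen
```

The same score can serve as a confidence estimate for the currently executed behavior: when it drops, the system has grounds to switch to an alternate context, as the abstract describes.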

  • 29.
    Bliek, Adna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    How Can a Robot Trigger Human Backchanneling?2020In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, p. 96-103Conference paper (Refereed)
    Abstract [en]

    In human communication, backchanneling is an important part of the natural interaction protocol. The purpose is to signify the listener’s attention, understanding, agreement, or to indicate that a speaker should go on talking. While the effects of backchanneling robots on humans have been investigated, how and when humans backchannel to talking robots is poorly studied. In this paper we investigate how the robot’s behavior as a speaker affects a human listener’s backchanneling behavior. This is interesting in Human-Robot Interaction since backchanneling between humans has been shown to support more fluid interactions, and human-robot interaction would therefore benefit from mimicking this human communication feature. The results show that backchanneling increases when the robot exhibits backchannel-inviting cues such as pauses and gestures. Furthermore, clear differences between how a human backchannels to another human and to a robot are shown.

  • 31. Bontsema, J.
    et al.
    Hemming, J.
    Pekkeriet, E.
    Saeys, W.
    Edan, Y.
    Shapiro, A.
    Hočevar, M.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Oberti, R.
    Armada, M.
    Ulbrich, H.
    Baur, J.
    Debilde, B.
    Best, S.
    Evain, S.
    Gauchel, W.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    CROPS: high tech agricultural robots2014Conference paper (Other academic)
  • 32. Bontsema, Jan
    et al.
    Hemming, Jochen
    Pekkeriet, Erik
    Saeys, Wouter
    Edan, Yael
    Shapiro, Amir
    Hočevar, Marko
    Oberti, Roberto
    Armada, Manuel
    Ulbrich, Heinz
    Baur, Jörg
    Debilde, Benoit
    Best, Stanley
    Evain, Sébastien
    Gauchel, Wolfgang
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    CROPS: Clever Robots for Crops2015In: Engineering & Technology Reference, ISSN 2056-4007, Vol. 1, no 1Article in journal (Refereed)
    Abstract [en]

    In the EU-funded CROPS project, robots are developed for site-specific spraying and selective harvesting of fruit and fruit vegetables. The robots are designed to harvest crops such as greenhouse vegetables, apples and grapes, to perform canopy spraying in orchards, and to perform precision target spraying in grape vines. Attention is paid to the detection of obstacles for autonomous navigation in a safe way in plantations and forests. For the different applications, platforms were built. Sensing systems and vision algorithms have been developed. For software the Robot Operating System is used. A 9 degrees of freedom manipulator was designed and tested for sweet-pepper harvesting, apple harvesting and close-range spraying. For the applications different end-effectors were designed and tested. For sweet pepper, a platform was built that can move between the crop rows on the common greenhouse rail system, which also serves as heating pipes. The apple harvesting platform is based on a current mechanical grape harvester. In discussion with growers, so-called ‘walls of fruit trees’ have been designed, which bring the robots closer to practice. A canopy-optimised sprayer has been designed as a trailed sprayer with a centrifugal blower. All the applications have been tested under practical conditions.

  • 33.
    Edström, Filip
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    de Luna, Xavier
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Robot causal discovery aided by human interaction2023In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2023, p. 1731-1736Conference paper (Refereed)
    Abstract [en]

    Causality is relatively unexplored in robotics even if it is highly relevant, in several respects. In this paper, we study how a robot’s causal understanding can be improved by allowing the robot to ask humans causal questions. We propose a general algorithm for selecting direct causal effects to ask about, given a partial causal representation (using partially directed acyclic graphs, PDAGs) obtained from observational data. We propose three versions of the algorithm, inspired by different causal discovery techniques: constraint-based, score-based, and intervention-based. We evaluate the versions in a simulation study and our results show that asking causal questions improves the causal representation over all simulated scenarios. Further, the results show that asking causal questions based on PDAGs discovered from data provides a significant improvement compared to asking questions at random, and the version inspired by score-based techniques performs particularly well over all simulated experiments.
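The setting can be made concrete with a toy sketch: in a PDAG recovered from observational data, some edges remain undirected, and one naive strategy is to ask a human about exactly those edges and orient them according to the answer. The edge encoding and selection rule below are simplifications for illustration; the paper's algorithm uses more refined, discovery-inspired criteria for choosing what to ask.

```python
# Illustrative sketch (not the paper's selection criteria): find the
# undirected edges of a PDAG and orient one according to a human's answer.
# Encoding: a set of directed pairs; an edge present in both directions
# is treated as undirected.

def undirected_edges(pdag):
    """Return the edges that are still undirected in this encoding."""
    return sorted({tuple(sorted((a, b))) for a, b in pdag if (b, a) in pdag})

def orient(pdag, edge, cause_first):
    """Orient an undirected edge; cause_first=True means edge[0] -> edge[1]."""
    a, b = edge if cause_first else (edge[1], edge[0])
    pdag.discard((b, a))  # drop the non-causal direction
    pdag.add((a, b))
    return pdag

# x - y undirected, y -> z already directed by observational discovery.
pdag = {("x", "y"), ("y", "x"), ("y", "z")}
to_ask = undirected_edges(pdag)
print(to_ask)  # [('x', 'y')]
# Human answers that x causes y:
pdag = orient(pdag, to_ask[0], cause_first=True)
print(sorted(pdag))  # [('x', 'y'), ('y', 'z')]
```

Each answered question removes one residual ambiguity from the PDAG, which is why targeted questions outperform random ones in the paper's simulations.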

  • 34.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Learning High-Level Behaviors From Demonstration Through Semantic Networks2012In: Proceedings of 4th International Conference on Agents and Artificial Intelligence, 2012, p. 419-426Conference paper (Refereed)
    Abstract [en]

    In this paper we present an approach for high-level behavior recognition and selection integrated with a low-level controller to help the robot learn new skills from demonstrations. By means of a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.

    Download full text (pdf)
    fulltext
  • 35.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration2013In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, p. 67-74Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief overview of challenges in designing cognitive architectures for Learning from Demonstration. By investigating features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We head towards designing and implementing goal-based imitation learning that not only allows the robot to learn necessary conditions for executing particular behaviors, but also to understand the intents of the tutor and reproduce the same behaviors accordingly.

  • 36.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Jevtić, Aleksandar
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations2015In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, p. 24-39Article in journal (Refereed)
    Abstract [en]

    In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires attention by the robot, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning the conditions needed to exhibit a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason about and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, applicability of the system in an object shape classification scenario is evaluated.
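The spreading-activation mechanism mentioned in the abstract can be illustrated with a toy semantic network. The node names, weights, and update rule below are assumptions for illustration only: activation injected at observed concepts flows along weighted links, and strongly activated nodes indicate which concepts a demonstration is "about".

```python
# Toy spreading-activation sketch (assumed structure, not the paper's code):
# activation starts at observed concepts and propagates along weighted links.

def spread(network, sources, decay=0.5, steps=2):
    """network: node -> list of (neighbour, weight); returns activation levels."""
    activation = {node: 0.0 for node in network}
    for s in sources:
        activation[s] = 1.0  # inject activation at observed concepts
    for _ in range(steps):
        new = dict(activation)
        for node, links in network.items():
            for neighbour, weight in links:
                new[neighbour] += decay * weight * activation[node]
        activation = new
    return activation

# Hypothetical concept network: perceptual features link to object concepts.
network = {
    "red":   [("ball", 0.8)],
    "round": [("ball", 0.9)],
    "ball":  [("toy", 0.7)],
    "toy":   [],
}
act = spread(network, sources=["red", "round"])
print(max(act, key=act.get))  # -> ball
```

Converging activation from several observed features ("red", "round") singles out the concept they jointly support, which is the generalization step the abstract attributes to the semantic network.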

  • 37.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Thomas, Hellström
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Applying a priming mechanism for intention recognition in shared control2015In: 2015 IEEE international multi-disciplinary conference on cognitive methods in situation awareness and decision support (CogSIMA), IEEE, 2015, p. 35-41Conference paper (Refereed)
    Abstract [en]

    In many robotics shared control applications, users are forced to focus hard on the robot due to the task’s high sensitivity or the robot’s misunderstanding of the user’s intention. This brings frustration and dissatisfaction to the user and reduces overall efficiency. The user’s intention is sometimes unclear and hard to identify without some kind of bias in the identification process. In this paper, we present a solution in which an attentional mechanism helps the robot to recognize the user’s intention. The solution uses a priming mechanism and parameterized behavior primitives to support intention recognition and improve shared control for teleoperation tasks.

  • 38.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Thomas, Hellström
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    On the Similarities Between Control Based and Behavior Based Visual Servoing2015In: Proceedings of the 30th Annual ACM Symposium on Applied Computing, New York: Association for Computing Machinery (ACM), 2015, p. 320-326Conference paper (Refereed)
    Abstract [en]

    Robotics is tightly connected to both artificial intelligence (AI) and control theory. Both AI and control based robotics are active and successful research areas, but research is often conducted by well separated communities. In this paper, we compare the two approaches in a case study for the design of a robot that should move its arm towards an object with the help of camera data. The control based approach is a model-free version of Image Based Visual Servoing (IBVS), which is based on mathematical modeling of the sensing and motion task. The AI approach, here denoted Behavior-Based Visual Servoing (BBVS), contains elements that are biologically plausible and inspired by schema-theory. We show how the two approaches lead to very similar solutions, even identical given a few simplifying assumptions. This similarity is shown both analytically and numerically. However, in a simple picking task with a 3 DoF robot arm, BBVS shows significantly higher performance than the IBVS approach, partly because it contains more manually tuned parameters. While the results obviously do not apply to all tasks and solutions, the comparison illustrates strengths and weaknesses of both approaches, and how they are tightly connected and share many similarities despite very different starting points and methodologies.
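The control-based side of the comparison can be illustrated with a generic, textbook-style proportional visual-servoing law; this is not the paper's implementation, and the 1:1 mapping from commanded velocity to image motion is a deliberate simplification. The point is only that IBVS drives the image-plane error between observed and desired feature positions to zero.

```python
# Minimal image-based visual servoing sketch (generic proportional law,
# not the paper's IBVS variant): command a velocity proportional to the
# image-plane error and iterate until the feature reaches its target.

def ibvs_step(feature, desired, gain=0.3):
    """One proportional control step: velocity command from image-plane error."""
    return [gain * (d - f) for f, d in zip(feature, desired)]

feature = [120.0, 80.0]   # current pixel position of the tracked object
desired = [160.0, 120.0]  # desired pixel position in the image
for _ in range(30):
    v = ibvs_step(feature, desired)
    # Simplifying assumption: commanded motion maps 1:1 to image motion.
    feature = [f + vi for f, vi in zip(feature, v)]

err = max(abs(d - f) for f, d in zip(feature, desired))
print(err < 1e-3)  # the error contracts geometrically toward zero
```

A behavior-based formulation replaces the analytic error signal with hand-designed perceptual behaviors, yet, as the paper argues, under simplifying assumptions the two formulations produce essentially the same control law.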

  • 39.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Thomas, Hellström
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Priming as a means to reduce ambiguity in learning from demonstration2016In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 8, no 1, p. 5-19Article in journal (Refereed)
    Abstract [en]

    Learning from Demonstration (LfD) is an established robot learning technique by which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention behind one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. Conducted experiments show that the developed method leads to faster and improved understanding of the intention with a demonstration by reducing ambiguity.

  • 40.
    Hamrin, Maria
    et al.
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Norqvist, Patrik
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Andre, Mats
    Eriksson, AI
    A statistical study of ion energization at 1700 km in the auroral region2002In: Annales Geophysicae, ISSN 0992-7689, E-ISSN 1432-0576, Vol. 20, no 12, p. 1943-1958Article in journal (Refereed)
    Abstract [en]

    We present a comprehensive overview of several potentially relevant causes for the oxygen energization in the auroral region. Data from the Freja satellite near 1700 km altitude are used for an unconditional statistical investigation. The data are obtained in the Northern Hemisphere during 21 months in the declining phase of the solar cycle. The importance of various wave types for the ion energization is statistically studied. We also investigate the correlation of ion heating with precipitating protons, accelerated auroral electrons, suprathermal electron bursts, the electron density variations, the Kp index and solar illumination of the nearest conjugate ionosphere. We find that sufficiently strong broadband ELF waves, electromagnetic ion cyclotron waves, and waves around the lower hybrid frequency are foremost associated with the ion heating. However, magnetosonic waves, with a sharp, lower frequency cutoff just below the proton gyrofrequency, are not found to contribute to the ion heating. In the absence of the first three wave emissions, transversely energized ions are rare. These wave types are approximately equally efficient in heating the ions, but we find that the main source for the heating is broadband ELF waves, since they are most common in the auroral region. We have also observed that the conditions for ion heating are more favourable for smaller ratios of the spectral densities S-E/S-B of the broadband ELF waves at the oxygen gyrofrequency.

    Download full text (pdf)
    fulltext
  • 41.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A random walk through the stock market1998Other (Other academic)
  • 42.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    AI and its consequences for the written word2023In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1326166Article in journal (Refereed)
    Abstract [en]

    The latest developments of chatbots driven by Large Language Models (LLMs), more specifically ChatGPT, have shaken the foundations of how text is created, and may drastically reduce and change the need, ability, and valuation of human writing. Furthermore, our trust in the written word is likely to decrease, as an increasing proportion of all written text will be AI-generated – and potentially incorrect. In this essay, I discuss these implications and possible scenarios for us humans, and for AI itself.

    Download full text (pdf)
    fulltext
  • 43.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    An intelligent rollator with steering by braking2012Report (Other academic)
    Abstract [en]

    Walking aids such as rollators help many individuals to maintain mobility and independence. While these devices clearly improve balance and mobility, they also lead to an increased risk of falling accidents. With an increasing proportion of elderly in the population, there is a clear need for improving these devices. This paper describes ongoing work on the development of ROAR - an intelligent rollator that can help users with limited vision, cognition or motoric abilities. Automatic detection and avoidance of obstacles such as furniture and doorposts simplify usage in cluttered indoor environments. For outdoor usage, the design includes a function to avoid curbs and other holes that may otherwise cause serious accidents. Ongoing work includes a novel approach to compensate for the sideways drift that occurs both indoors and outdoors for users with certain types of cognitive or motoric disabilities. Also the control mechanism differs from other similar designs. Steering