Umeå University
umu.se Publications

1-50 of 99
  • 1.
    Abedin, Md Reaz Ashraful
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Bensch, Suna
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Self-supervised language grounding by active sensing combined with Internet acquired images and text (2017). In: Proceedings of the Fourth International Workshop on Recognition and Action for Scene Understanding (REACTS2017) / [ed] Jorge Dias, George Azzopardi, Rebeca Marf, Málaga: REACTS, 2017, pp. 71-83. Conference paper (Refereed)
    Abstract [en]

    For natural and efficient verbal communication between a robot and humans, the robot should be able to learn names and appearances of new objects it encounters. In this paper we present a solution combining active sensing of images with text based and image based search on the Internet. The approach allows the robot to learn both object name and how to recognise similar objects in the future, all self-supervised without human assistance. One part of the solution is a novel iterative method to determine the object name using image classification, acquisition of images from additional viewpoints, and Internet search. In this paper, the algorithmic part of the proposed solution is presented together with evaluations using manually acquired camera images, while Internet data was acquired through direct and reverse image search with Google, Bing, and Yandex. Classification with multi-class SVM and with five different feature settings were evaluated. With five object classes, the best performing classifier used a combination of Pyramid of Histogram of Visual Words (PHOW) and Pyramid of Histogram of Oriented Gradient (PHOG) features, and reached a precision of 80% and a recall of 78%.

    Full text (pdf)
    fulltext
  • 2.
    Ali, W
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Georgsson, Fredrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Visual tree detection for autonomous navigation in forest environment (2008). In: IEEE Intelligent Vehicles Symposium, Eindhoven, Netherlands, 2008, pp. 1144-1149. Conference paper (Refereed)
    Abstract [en]

    This paper describes a classification based tree detection method for autonomous navigation of forest vehicles in forest environment. Fusion of color, and texture cues has been used to segment the image into tree trunk and background objects. The segmentation of images into tree trunk and background objects is a challenging task due to high variations of illumination, effect of different color shades, non-homogeneous bark texture, shadows and foreshortening. To accomplish this, the approach has been to find the best combinations of color, and texture descriptors, and classification techniques. An additional task has been to estimate the distance between forest vehicle and the base of segmented trees using monocular vision. A simple heuristic distance measurement method is proposed that is based on pixel height and a reference width. The performance of various color and texture operators, and accuracy of classifiers has been evaluated using cross validation techniques.

  • 3.
    Arad, Boaz
    et al.
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Balendonck, Jos
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Barth, Ruud
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Ben-Shahar, Ohad
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hemming, Jochen
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Tielen, Toon
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    van Tuijl, Bart
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Development of a sweet pepper harvesting robot (2020). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 37, no. 6, pp. 1027-1039. Journal article (Refereed)
    Abstract [en]

    This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, RGB-D camera, high-end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All is mounted on a cart that autonomously drives on pipe rails and concrete floor in the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have proven that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.

    Full text (pdf)
    fulltext
  • 4. Arafat, Yeasin
    et al.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Rashid, Jayedur
    Parameterized sensor model and an approach for measuring goodness of robotic maps (2010). In: Proceedings of the 15th IASTED International Conference on Robotics and Applications (RA 2010), ACTA Press, 2010. Conference paper (Refereed)
    Abstract [en]

    Map building is a classical problem in mobile and autonomous robotics, and sensor models is a way to interpret raw sensory information, especially for building maps. In this paper we propose a parameterized sensor model, and optimize map goodness with respect to these parameters. A new approach, measuring the goodness of maps without a handcrafted map of the actual environment is introduced and evaluated. Three different techniques: statistical analysis, derivative of images, and comparison of binary maps have been used as estimates of map goodness. The results show that the proposed sensor model generates better maps than a standard sensor model. However, the proposed approach of measuring goodness of maps does not improve the results as much as expected.

  • 5.
    Athanassiadis, Dimitris
    et al.
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Bergström, Dan
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Lindroos, Ola
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Nordfjell, Tomas
    Dept. of Forest Resource Management, Swedish University of Agricultural Sciences.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Path tracking for autonomous forwarders in forest terrain (2010). In: Precision Forestry Symposium: developments in Precision Forestry since 2006 / [ed] Ackerman P A, Ham H, & Lu C, 2010, pp. 42-43. Conference paper (Refereed)
    Full text (pdf)
    FULLTEXT02
  • 6.
    Baranwal, Neha
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Singh, Avinash
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Fusion of Gesture and Speech for Increased Accuracy in Human Robot Interaction (2019). In: 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR), IEEE, 2019, pp. 139-144. Conference paper (Refereed)
    Abstract [en]

    An approach for decision-level fusion for gesture and speech based human-robot interaction (HRI) is proposed. A rule-based method is compared with several machine learning approaches. Gestures and speech signals are initially classified using hidden Markov models, reaching accuracies of 89.6% and 84% respectively. The rule-based approach reached 91.6% while SVM, which was the best of all evaluated machine learning algorithms, reached an accuracy of 98.2% on the test data. A complete framework is deployed in real time humanoid robot (NAO) which proves the efficacy of the system.
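    A minimal sketch of decision-level fusion of the kind described above, assuming two modality classifiers that each output per-class probabilities. All class names, weights, and values below are invented for illustration and this is not the paper's implementation:

        # Hypothetical decision-level fusion of gesture and speech classifier outputs.
        # Inputs are per-class probabilities (e.g. HMM posteriors); weights are assumptions.
        def fuse_decisions(gesture_probs, speech_probs, w_gesture=0.5, w_speech=0.5):
            classes = set(gesture_probs) | set(speech_probs)
            fused = {c: w_gesture * gesture_probs.get(c, 0.0)
                        + w_speech * speech_probs.get(c, 0.0)
                     for c in classes}
            best = max(fused, key=fused.get)  # highest fused score wins
            return best, fused

        gesture = {"come_here": 0.7, "stop": 0.2, "wave": 0.1}
        speech = {"come_here": 0.5, "stop": 0.4, "wave": 0.1}
        label, scores = fuse_decisions(gesture, speech)
        print(label)  # -> "come_here"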

  • 7.
    Barth, Ruud
    et al.
    Greenhouse Horticulture, Wageningen University & Research Center.
    Baur, Jörg
    Institute of Applied Mechanics, Technische Universität München.
    Buschmann, Thomas
    Institute of Applied Mechanics, Technische Universität München.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Nguyen, Thanh
    KU Leuven, Department of Biosystems.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Saeys, Wouter
    KU Leuven, Department of Biosystems.
    Salinas, Carlota
    Centre for Automation and Robotics UPM-CSIC.
    Vitzrabin, Efi
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev.
    Using ROS for agricultural robotics: design considerations and experiences (2014). In: RHEA-2014 / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, 2014, pp. 509-518. Conference paper (Refereed)
    Abstract [en]

    We report on experiences of using the ROS middleware for development of agricultural robots. We describe software related design considerations for all main components in developed subsystems as well as drawbacks and advantages with the chosen approaches. This work was partly funded by the European Commission (CROPS GA no 246252).

  • 8.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Dignum, Frank
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Increasing robot understandability through social practices (2022). In: Proceedings of Cultu-Ro 2022, Workshop on Cultural Influences in Human-Robot Interaction: Today and Tomorrow: 31st IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 22), 2022. Conference paper (Refereed)
    Abstract [en]

    In this short paper we discuss how incorporating social practices in robotics may contribute to how well humans understand robots’ actions and intentions. Since social practices typically are applied by all interacting parties, also the robots’ understanding of the humans may improve. We further discuss how the involved mechanisms have to be adjusted to fit the cultural context in which the interaction takes place, and how social practices may have to be transformed to fit a robot’s capabilities and limitations.

  • 9.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Drewes, Frank
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Grammatical Inference of Graph Transformation Rules (2015). In: Proceedings of the 7th Workshop on Non-Classical Models of Automata and Applications (NCMA 2015), Austrian Computer Society, 2015, pp. 73-90. Conference paper (Refereed)
  • 10.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    On ambiguity in learning from demonstration (2010). In: Intelligent Autonomous Systems 11 (IAS-11) / [ed] H. Christensen, F. Groen, and E. Petriu, Amsterdam: IOS Press, 2010, pp. 47-56. Conference paper (Refereed)
    Abstract [en]

    An overlooked problem in Learning From Demonstration is the ambiguity that arises, for instance, when the robot is equipped with more sensors than necessary for a certain task. Simply trying to repeat all aspects of a demonstration is seldom what the human teacher wants, and without additional information, it is hard for the robot to know which features are relevant and which should be ignored. This means that a single demonstration maps to several different behaviours the teacher might have intended. This one-to-many (or many-to-many) mapping from a demonstration (or several demonstrations) into possible intended behaviours is the ambiguity that is the topic of this paper. Ambiguity is defined as the size of the current hypothesis space. We investigate the nature of the ambiguity for different kinds of hypothesis spaces and how it is reduced by a new concept learning algorithm.

  • 11.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Proceedings of Umeå's 18th student conference in computing science: USCCS 2014.1 (2014). Conference proceedings (Other academic)
    Full text (pdf)
    fulltext
  • 12.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Proceedings of Umeå's 20th student conference in computing science: USCCS 2016 (2016). Conference proceedings (Other academic)
    Full text (pdf)
    fulltext
  • 13.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Proceedings of Umeå's 21st student conference in computing science: USCCS 2017 (2017). Conference proceedings (Other academic)
    Full text (pdf)
    fulltext
  • 14.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Department of Computing Science.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Department of Computing Science.
    Proceedings of Umeå's 22nd Student Conference in Computing Science (USCCS 2018), 2018. Conference proceedings (Other academic)
    Full text (pdf)
    fulltext
  • 15.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Proceedings of Umeå's 23rd Student Conference in Computing Science: USCCS 2019 (2019). Conference proceedings (Other academic)
    Abstract [en]

    The Umeå Student Conference in Computing Science (USCCS) is organized annually as part of a course given by the Computing Science department at Umeå University. The objective of the course is to give the students a practical introduction to independent research, scientific writing, and oral presentation.

    A student who participates in the course first selects a topic and a research question that he or she is interested in. If the topic is accepted, the student outlines a paper and composes an annotated bibliography to give a survey of the research topic. The main work consists of conducting the actual research that answers the question asked, and convincingly and clearly reporting the results in a scientific paper. Another major part of the course is multiple internal peer review meetings in which groups of students read each others’ papers and give feedback to the author. This process gives valuable training in both giving and receiving criticism in a constructive manner. Altogether, the students learn to formulate and develop their own ideas in a scientific manner, in a process involving internal peer reviewing of each other’s work and under supervision of the teachers, and incremental development and refinement of a scientific paper.

    Each scientific paper is submitted to USCCS through an on-line submission system, and receives reviews written by members of the Computing Science department. Based on the review, the editors of the conference proceedings (the teachers of the course) issue a decision of preliminary acceptance of the paper to each author. If, after final revision, a paper is accepted, the student is given the opportunity to present the work at the conference. The review process and the conference format aims at mimicking realistic settings for publishing and participation at scientific conferences.

    Full text (pdf)
    fulltext
  • 16.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Towards Proactive Robot Behavior Based on Incremental Language Analysis (2014). In: MMRWHRI '14 Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction / [ed] Mary Ellen Foster, Manuel Giuliani, Ronald P. A. Petrick, 2014, pp. 21-22. Conference paper (Refereed)
    Abstract [en]

    This paper describes ongoing and planned work on incremental language processing coupled to inference of expected robot actions. Utterances are processed word-by-word, simultaneously with inference of expected robot actions, thus enabling the robot to prepare and act proactively to human utterances. We believe that such a model results in more natural human-robot communication since proactive behavior is a feature of human-human communication.

  • 17.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Department of Computing Science.
    Jevtic, Aleksandar
    Institut de Robotica i Informatica Industrial, Technical University of Catalonia, Spain.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    On Interaction Quality in Human-Robot Interaction (2017). In: Proceedings of the 9th International Conference on Agents and Artificial Intelligence / [ed] H. Jaap van den Herik, Ana Paula Rocha, Joaquim Filipe, Setúbal: SciTePress, 2017, Vol. 1, pp. 182-189. Conference paper (Refereed)
    Abstract [en]

    In many complex robotics systems, interaction takes place in all directions between human, robot, and environment. Performance of such a system depends on this interaction, and a proper evaluation of a system must build on a proper modeling of interaction, a relevant set of performance metrics, and a methodology to combine metrics into a single performance value. In this paper, existing models of human-robot interaction are adapted to fit complex scenarios with one or several humans and robots. The interaction and the evaluation process is formalized, and a general method to fuse performance values over time and for several performance metrics is presented. The resulting value, denoted interaction quality, adds a dimension to ordinary performance metrics by being explicit about the interplay between performance metrics, and thereby provides a formal framework to understand, model, and address complex aspects of evaluation of human-robot interaction. 

  • 18.
    Bensch, Suna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Sun, Jiangeng
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Bandera Rubio, Juan Pedro
    Department of Electronic Technology, University of Málaga, Málaga, Spain.
    Romero-Garcés, Adrián
    Department of Electronic Technology, University of Málaga, Málaga, Spain.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Personalised multi-modal communication for HRI (2023). Conference paper (Refereed)
    Abstract [en]

    One important aspect when designing understandable robots is how robots should communicate with a human user to be understood in the best way. In elder care applications this is particularly important, and also difficult since many older adults suffer from various kinds of impairments. In this paper we present a solution where communication modality and communication parameters are adapted to fit both a user profile and an environment model comprising information about light and sound conditions that may affect communication. The Rasa dialogue manager is complemented with necessary functionality, and the operation is verified with a Pepper robot interacting with several personas with impaired vision, hearing, and cognition. Several relevant ethical questions are identified and briefly discussed, as a contribution to the WARN workshop.

    Full text (pdf)
    fulltext
  • 19.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A formalism for learning from demonstration (2010). In: Paladyn - Journal of Behavioral Robotics, ISSN 2080-9778, E-ISSN 2081-4836, Vol. 1, no. 1, pp. 1-13. Journal article (Refereed)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as mappings between some of these spaces. Finally, behavior primitives are introduced as one example of good bias in learning, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination. The formalism is exemplified through a sequence learning task where a robot equipped with a gripper arm is to move objects to specific areas. The introduced concepts are illustrated with special focus on how bias of various kinds can be used to enable learning from a single demonstration, and how ambiguities in demonstrations can be identified and handled.

    Full text (pdf)
    FULLTEXT01
  • 20.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskaplig fakultet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskaplig fakultet, Institutionen för datavetenskap.
    Behavior recognition for segmentation of demonstrated tasks (2008). In: IEEE SMC International Conference on Distributed Human-Machine Systems (DHMS), 2008. Conference paper (Refereed)
    Abstract [en]

    One common approach to the robot learning technique Learning From Demonstration, is to use a set of pre-programmed skills as building blocks for more complex tasks. One important part of this approach is recognition of these skills in a demonstration comprising a stream of sensor and actuator data. In this paper, three novel techniques for behavior recognition are presented and compared. The first technique is function-oriented and compares actions for similar inputs. The second technique is based on auto-associative neural networks and compares reconstruction errors in sensory-motor space. The third technique is based on S-Learning and compares sequences of patterns in sensory-motor space. All three techniques compute an activity level which can be seen as an alternative to a pure classification approach. Performed tests show how the former approach allows a more informative interpretation of a demonstration, by not determining "correct" behaviors but rather a number of alternative interpretations.

  • 21.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskaplig fakultet, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskaplig fakultet, Institutionen för datavetenskap.
    Formalising learning from demonstration (2008). Report (Other academic)
    Abstract [en]

    The paper describes and formalizes the concepts and assumptions involved in Learning from Demonstration (LFD), a common learning technique used in robotics. Inspired by the work on planning and actuation by LaValle, common LFD-related concepts like goal, generalization, and repetition are here defined, analyzed, and put into context. Robot behaviors are described in terms of trajectories through information spaces and learning is formulated as the mappings between some of these spaces. Finally, behavior primitives are introduced as one example of useful bias in the learning process, dividing the learning process into the three stages of behavior segmentation, behavior recognition, and behavior coordination.

    Full text (pdf)
    FULLTEXT01
  • 22.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Predictive Learning in Context (2010). In: Proceedings of the tenth international conference on epigenetic robotics: modeling cognitive development in robotic systems / [ed] Birger Johansson, Erol Sahin & Christian Balkenius, Lund, Sweden, 2010, pp. 157-158. Conference paper (Refereed)
  • 23.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Behavior recognition for learning from demonstration (2010). In: 2010 IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., IEEE, 2010, pp. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.

    Full text (pdf)
    FULLTEXT01
  • 24.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Model-free learning from demonstration (2010). In: ICAART 2010 - Proceedings of the international conference on agents and artificial intelligence: volume 2 / [ed] Joaquim Filipe, Ana LN Fred, Bernadette Sharp, Portugal: INSTICC, 2010, pp. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Full text (pdf)
    fulltext
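    A rough illustration of the general idea of predicting the next action from the sequence of past sensor and motor events: the toy sketch below is not the published PSL algorithm, and the event names and fixed history length are assumptions made only for this example.

        # Toy sequence predictor: store demonstrated (history -> next action) counts
        # and replay the most frequent continuation. Illustrative only.
        from collections import defaultdict, Counter

        class SequencePredictor:
            def __init__(self, history_len=2):
                self.history_len = history_len
                self.table = defaultdict(Counter)

            def train(self, events):
                # events: list of (sensor, action) tuples from one demonstration
                for i in range(self.history_len, len(events)):
                    key = tuple(events[i - self.history_len:i])
                    self.table[key][events[i][1]] += 1

            def predict(self, recent_events):
                counts = self.table.get(tuple(recent_events[-self.history_len:]))
                return counts.most_common(1)[0][0] if counts else None

        demo = [("clear", "forward"), ("clear", "forward"),
                ("wall_left", "turn_right"), ("clear", "forward")]
        p = SequencePredictor()
        p.train(demo)
        print(p.predict(demo[:2]))  # -> "turn_right"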
  • 25.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Predictive learning from demonstration (2011). In: Agents and artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Filipe, Joaquim, Fred, Ana, Sharp, Bernadette, Berlin: Springer Verlag, 2011, 1, pp. 186-200. Chapter in book (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

    Full text (pdf)
    fulltext
  • 26.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Robot learning from demonstration using predictive sequence learning (2011). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2011, pp. 235-250. Chapter in book (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

  • 27.
    Billing, Erik
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Simultaneous control and recognition of demonstrated behavior (2011). Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

    Full text (pdf)
    fulltext
  • 28. Billing, Erik
    et al.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Simultaneous recognition and reproduction of demonstrated behavior (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, pp. 43-53. Journal article (Refereed)
    Abstract [en]

    Predictions of sensory-motor interactions with the world is often referred to as a key component in cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: (1) to identify which behavior that best matches current context and (2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 29.
    Bliek, Adna
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Bensch, Suna
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    How Can a Robot Trigger Human Backchanneling? (2020). In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, pp. 96-103. Conference paper (Refereed)
    Abstract [en]

    In human communication, backchanneling is an important part of the natural interaction protocol. The purpose is to signify the listener’s attention, understanding, agreement, or to indicate that a speaker should go on talking. While the effects of backchanneling robots on humans have been investigated, studies of how and when humans backchannel to talking robots is poorly studied. In this paper we investigate how the robot’s behavior as a speaker affects a human listener’s backchanneling behavior. This is interesting in Human-Robot Interaction since backchanneling between humans has been shown to support more fluid interactions, and human-robot interaction would therefore benefit from mimicking this human communication feature. The results show that backchanneling increases when the robot exhibits backchannel-inviting cues such as pauses and gestures. Furthermore, clear differences between how a human backchannels to another human and to a robot are shown.

  • 31. Bontsema, J.
    et al.
    Hemming, J.
    Pekkeriet, E.
    Saeys, W.
    Edan, Y.
    Shapiro, A.
    Hočevar, M.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Oberti, R.
    Armada, M.
    Ulbrich, H.
    Baur, J.
    Debilde, B.
    Best, S.
    Evain, S.
    Gauchel, W.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    CROPS: high tech agricultural robots (2014). Conference paper (Other academic)
  • 32. Bontsema, Jan
    et al.
    Hemming, Jochen
    Pekkeriet, Erik
    Saeys, Wouter
    Edan, Yael
    Shapiro, Amir
    Hočevar, Marko
    Oberti, Roberto
    Armada, Manuel
    Ulbrich, Heinz
    Baur, Jörg
    Debilde, Benoit
    Best, Stanley
    Evain, Sébastien
    Gauchel, Wolfgang
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Ringdahl, Ola
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    CROPS: Clever Robots for Crops (2015). In: Engineering & Technology Reference, ISSN 2056-4007, Vol. 1, no. 1. Journal article (Refereed)
    Abstract [en]

    In the EU-funded CROPS project robots are developed for site-specific spraying and selective harvesting of fruit and fruit vegetables. The robots are being designed to harvest crops, such as greenhouse vegetables, apples, grapes and for canopy spraying in orchards and for precision target spraying in grape vines. Attention is paid to the detection of obstacles for autonomous navigation in a safe way in plantations and forests. For the different applications, platforms were built. Sensing systems and vision algorithms have been developed. For software the Robot Operating System is used. A 9 degrees of freedom manipulator was designed and tested for sweet-pepper harvesting, apple harvesting and in close range spraying. For the applications different end-effectors were designed and tested. For sweet pepper a platform that can move in between the crop rows on the common greenhouse rail system which also serves as heating pipes was built. The apple harvesting platform is based on a current mechanical grape harvester. In discussion with growers so-called ‘walls of fruit trees’ have been designed which bring robots closer to the practice. A canopy-optimised sprayer has been designed as a trailed sprayer with a centrifugal blower. All the applications have been tested under practical conditions.

  • 33.
    Edström, Filip
    et al.
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    de Luna, Xavier
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Robot causal discovery aided by human interaction (2023). In: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2023, pp. 1731-1736. Conference paper (Refereed)
    Abstract [en]

    Causality is relatively unexplored in robotics even if it is highly relevant, in several respects. In this paper, we study how a robot’s causal understanding can be improved by allowing the robot to ask humans causal questions. We propose a general algorithm for selecting direct causal effects to ask about, given a partial causal representation (using partially directed acyclic graphs, PDAGs) obtained from observational data. We propose three versions of the algorithm inspired by different causal discovery techniques, such as constraint-based, score-based, and interventions. We evaluate the versions in a simulation study and our results show that asking causal questions improves the causal representation over all simulated scenarios. Further, the results show that asking causal questions based on PDAGs discovered from data provides a significant improvement compared to asking questions at random, and the version inspired by score-based techniques performs particularly well over all simulated experiments.
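    A small illustrative sketch of the general setting, assuming a PDAG represented simply as its set of still-undirected edges. The selection rule below (ask about the undirected edge whose endpoints touch the most other undirected edges) and the variable names are invented for the example and are not the algorithm proposed in the paper:

        # Toy selection of a causal question to ask a human, given the undirected
        # part of a PDAG. Illustrative only; not the paper's method.
        def pick_question(undirected_edges):
            # undirected_edges: set of frozensets {X, Y} whose orientation is unknown
            degree = {}
            for edge in undirected_edges:
                for node in edge:
                    degree[node] = degree.get(node, 0) + 1
            best = max(undirected_edges, key=lambda e: sum(degree[n] for n in e))
            x, y = sorted(best)
            return f"Does {x} directly cause {y}, or does {y} directly cause {x}?"

        pdag_undirected = {frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"C", "D"})}
        print(pick_question(pdag_undirected))  # asks about the B-C edge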

  • 34.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Learning High-Level Behaviors From Demonstration Through Semantic Networks (2012). In: Proceedings of 4th International Conference on Agents and Artificial Intelligence, 2012, pp. 419-426. Conference paper (Refereed)
    Abstract [en]

    In this paper we present an approach for high-level behavior recognition and selection integrated with a low-level controller to help the robot to learn new skills from demonstrations. By means of Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.

    Full text (pdf)
    fulltext
  • 35.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration (2013). In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, pp. 67-74. Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief overview of challenges in designing cognitive architectures for Learning from Demonstration. By investigating features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We head towards designing and implementing goal based imitation learning that not only allows the robot to learn necessary conditions for executing particular behaviors, but also to understand the intents of the tutor and reproduce the same behaviors accordingly.

  • 36.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Jevtić, Aleksandar
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations (2015). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, pp. 24-39. Journal article (Refereed)
    Abstract [en]

    In domains where robots carry out human’s tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires attention by the robot, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding tutor's intentions and learning conditions to exhibit a behavior. The proposed method combines ACO algorithms with semantic networks and spreading activation mechanism to reason and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, applicability of the system in an object shape classification scenario is evaluated.

  • 37.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Applying a priming mechanism for intention recognition in shared control (2015). In: 2015 IEEE international multi-disciplinary conference on cognitive methods in situation awareness and decision support (CogSIMA), IEEE, 2015, pp. 35-41. Conference paper (Refereed)
    Abstract [en]

    In many robotics shared control applications, users are forced to focus hard on the robot due to the task’s high sensitivity or the robot’s misunderstanding of the user’s intention. This brings frustration and dissatisfaction to the user and reduces overall efficiency. The user’s intention is sometimes unclear and hard to identify without some kind of bias in the identification process. In this paper, we present a solution in which an attentional mechanism helps the robot to recognize the user’s intention. The solution uses a priming mechanism and parameterized behavior primitives to support intention recognition and improve shared control for teleoperation tasks.

  • 38.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    On the Similarities Between Control Based and Behavior Based Visual Servoing (2015). In: Proceedings of the 30th Annual ACM Symposium on Applied Computing, New York: Association for Computing Machinery (ACM), 2015, pp. 320-326. Conference paper (Refereed)
    Abstract [en]

    Robotics is tightly connected to both artificial intelligence (AI) and control theory. Both AI and control based robotics are active and successful research areas, but research is often conducted by well separated communities. In this paper, we compare the two approaches in a case study for the design of a robot that should move its arm towards an object with the help of camera data. The control based approach is a model-free version of Image Based Visual Servoing (IBVS), which is based on mathematical modeling of the sensing and motion task. The AI approach, here denoted Behavior-Based Visual Servoing (BBVS), contains elements that are biologically plausible and inspired by schema-theory. We show how the two approaches lead to very similar solutions, even identical given a few simplifying assumptions. This similarity is shown both analytically and numerically. However, in a simple picking task with a 3 DoF robot arm, BBVS shows significantly higher performance than the IBVS approach, partly because it contains more manually tuned parameters. While the results obviously do not apply to all tasks and solutions, it illustrates both strengths and weaknesses with both approaches, and how they are tightly connected and share many similarities despite very different starting points and methodologies.

  • 39.
    Fonooni, Benjamin
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Priming as a means to reduce ambiguity in learning from demonstration (2016). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 8, no. 1, pp. 5-19. Journal article (Refereed)
    Abstract [en]

    Learning from Demonstration (LfD) is an established robot learning technique by which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention with one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. Conducted experiments show that the developed method leads to faster and improved understanding of the intention with a demonstration by reducing ambiguity.

  • 40.
    Hamrin, Maria
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för fysik.
    Norqvist, Patrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för fysik.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Andre, Mats
    Eriksson, AI
    A statistical study of ion energization at 1700 km in the auroral region (2002). In: Annales Geophysicae, ISSN 0992-7689, E-ISSN 1432-0576, Vol. 20, no. 12, pp. 1943-1958. Journal article (Refereed)
    Abstract [en]

    We present a comprehensive overview of several potentially relevant causes for the oxygen energization in the auroral region. Data from the Freja satellite near 1700 km altitude are used for an unconditional statistical investigation. The data are obtained in the Northern Hemisphere during 21 months in the declining phase of the solar cycle. The importance of various wave types for the ion energization is statistically studied. We also investigate the correlation of ion heating with precipitating protons, accelerated auroral electrons, suprathermal electron bursts, the electron density variations, K-P index and solar illumination of the nearest conjugate ionosphere. We find that sufficiently strong broadband ELF waves, electromagnetic ion cyclotron waves, and waves around the lower hybrid frequency are foremost associated with the ion heating. However, magnetosonic waves, with a sharp, lower frequency cutoff just below the proton gyrofrequency, are not found to contribute to the ion heating. In the absence of the first three wave emissions, transversely energized ions are rare. These wave types are approximately equally efficient in heating the ions, but we find that the main source for the heating is broadband ELF waves, since they are most common in the auroral region. We have also observed that the conditions for ion heating are more favourable for smaller ratios of the spectral densities S-E/S-B of the broadband ELF waves at the oxygen gyrofrequency.

    Full text (pdf)
    fulltext
  • 41.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A random walk through the stock market (1998). Other (Other academic)
  • 42.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    AI and its consequences for the written word (2023). In: Frontiers in Artificial Intelligence, E-ISSN 2624-8212, Vol. 6, article id 1326166. Journal article (Refereed)
    Abstract [en]

    The latest developments of chatbots driven by Large Language Models (LLMs), more specifically ChatGPT, have shaken the foundations of how text is created, and may drastically reduce and change the need, ability, and valuation of human writing. Furthermore, our trust in the written word is likely to decrease, as an increasing proportion of all written text will be AI-generated – and potentially incorrect. In this essay, I discuss these implications and possible scenarios for us humans, and for AI itself.

    Full text (pdf)
    fulltext
  • 43.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    An intelligent rollator with steering by braking (2012). Report (Other academic)
    Abstract [en]

    Walking aids such as rollators help a lot of individuals to maintain mobility and independence. While these devices clearly improve balance and mobility they also lead to increased risk of falling accidents. With an increasing proportion of elderly in the population, there is a clear need for improving these devices. This paper describes ongoing work on the development of ROAR - an intelligent rollator that can help users with limited vision, cognition or motoric abilities. Automatic detection and avoidance of obstacles such as furniture and doorposts simplify usage in cluttered indoor environments. For outdoors usage, the design includes a function to avoid curbs and other holes that may otherwise cause serious accidents. Ongoing work includes a novel approach to compensate for sideway drift that occur both indoors and outdoors for users with certain types of cognitive or motoric disabilities. Also the control mechanism differs from other similar designs. Steering is achieved by activating electrical brakes instead of turning the front wheels. Furthermore, cheap infrared sensors are used instead of a laser scanner for detection of objects.  Altogether, the design is believed to lead to increased acceptability, lower price and safer operation.

    Full text (pdf)
    fulltext
  • 44.
    Hellström, Thomas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.