umu.se Publications
1 - 31 of 31
  • 1.
    Backman, Anders
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bodin, Kenneth
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bucht, Gösta
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Geriatric Medicine.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Maxhall, Marcus
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Geriatric Medicine.
    Pederson, Thomas
    Innovative Communication Group, IT University of Copenhagen.
    Sjölie, Daniel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Sondell, Björn
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Geriatric Medicine.
    Surie, Dipak
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    easyADL – Wearable Support System for Independent Life despite Dementia (2006). In: ACM CHI 2006 Workshop on Designing Technology for People with Cognitive Impairments, 2006. Conference paper (Refereed)
    Abstract [en]

    This position paper outlines the easyADL project, a two-year project investigating the possibility of using wearable technology to assist people suffering from dementia in performing Activities of Daily Living (ADL). An introduction to the egocentric interaction modeling framework is provided and the virtual reality based development methodology is discussed.

  • 2.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Behavior recognition for learning from demonstration (2010). In: Proceedings of IEEE International Conference on Robotics and Automation / [ed] Nancy M. Amato et al., 2010, pp. 866-872. Conference paper (Refereed)
    Abstract [en]

    Two methods for behavior recognition are presented and evaluated. Both methods are based on the dynamic temporal difference algorithm Predictive Sequence Learning (PSL) which has previously been proposed as a learning algorithm for robot control. One strength of the proposed recognition methods is that the model PSL builds to recognize behaviors is identical to that used for control, implying that the controller (inverse model) and the recognition algorithm (forward model) can be implemented as two aspects of the same model. The two proposed methods, PSLE-Comparison and PSLH-Comparison, are evaluated in a Learning from Demonstration setting, where each algorithm should recognize a known skill in a demonstration performed via teleoperation. PSLH-Comparison produced the smallest recognition error. The results indicate that PSLH-Comparison could be a suitable algorithm for integration in a hierarchical control system consistent with recent models of human perception and motor control.

  • 3.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Model-free learning from demonstration (2010). In: ICAART 2010 - Proceedings of the international conference on agents and artificial intelligence: volume 2 / [ed] Joaquim Filipe, Ana LN Fred, Bernadette Sharp, Portugal: INSTICC, 2010, pp. 62-71. Conference paper (Refereed)
    Abstract [en]

    A novel robot learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated. PSL is a model-free prediction algorithm inspired by the dynamic temporal difference algorithm S-Learning. While S-Learning has previously been applied as a reinforcement learning algorithm for robots, PSL is here applied to a Learning from Demonstration problem. The proposed algorithm is evaluated on four tasks using a Khepera II robot. PSL builds a model from demonstrated data which is used to repeat the demonstrated behavior. After training, PSL can control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. PSL was able to successfully learn and repeat the first three (elementary) tasks, but it was unable to successfully repeat the fourth (composed) behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

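    Several of the PSL abstracts above describe the same core mechanism: controlling the robot by continually predicting the next action from the sequence of past sensor and motor events. The sketch below is an illustrative toy under that description, assuming a simple count-based hypothesis library; it is not the published algorithm.

    ```python
    from collections import defaultdict

    class SequencePredictor:
        """Toy next-event predictor: hypotheses map a recent window of
        sensor-motor events to the event that most often followed it."""

        def __init__(self, max_context=3):
            self.max_context = max_context
            # counts[context tuple][next event] -> number of occurrences
            self.counts = defaultdict(lambda: defaultdict(int))

        def train(self, events):
            """Record which event followed each context in a demonstration."""
            for i in range(1, len(events)):
                for k in range(1, self.max_context + 1):
                    if i - k < 0:
                        break
                    self.counts[tuple(events[i - k:i])][events[i]] += 1

        def predict(self, recent):
            """Prefer the longest (most specific) matching hypothesis."""
            for k in range(min(self.max_context, len(recent)), 0, -1):
                ctx = tuple(recent[-k:])
                if ctx in self.counts:
                    followers = self.counts[ctx]
                    return max(followers, key=followers.get)
            return None

    # hypothetical tele-operated trace of sensor/motor events
    demo = ['wall_left', 'turn_right', 'forward'] * 3
    predictor = SequencePredictor(max_context=3)
    predictor.train(demo)
    ```

    After training, repeatedly feeding the predictor its own recent history reproduces the demonstrated cycle, which is the control loop the abstracts describe.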
  • 4.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Predictive learning from demonstration (2011). In: Agents and artificial Intelligence: Second International Conference, ICAART 2010, Valencia, Spain, January 22-24, 2010. Revised Selected Papers / [ed] Filipe, Joaquim, Fred, Ana, Sharp, Bernadette, Berlin: Springer Verlag, 2011, 1, pp. 186-200. Book chapter (Refereed)
    Abstract [en]

    A model-free learning algorithm called Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL is inspired by several functional models of the brain. It constructs sequences of predictable sensory-motor patterns, without relying on predefined higher-level concepts. The algorithm is demonstrated on a Khepera II robot in four different tasks. During training, PSL generates a hypothesis library from demonstrated data. The library is then used to control the robot by continually predicting the next action, based on the sequence of passed sensor and motor events. In this way, the robot reproduces the demonstrated behavior. PSL is able to successfully learn and repeat three elementary tasks, but is unable to repeat a fourth, composed behavior. The results indicate that PSL is suitable for learning problems up to a certain complexity, while higher level coordination is required for learning more complex behaviors.

  • 5.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Robot learning from demonstration using predictive sequence learning (2011). In: Robotic systems: applications, control and programming / [ed] Ashish Dutta, Kanpur, India: IN-TECH, 2011, pp. 235-250. Book chapter (Refereed)
    Abstract [en]

    In this chapter, the prediction algorithm Predictive Sequence Learning (PSL) is presented and evaluated in a robot Learning from Demonstration (LFD) setting. PSL generates hypotheses from a sequence of sensory-motor events. Generated hypotheses can be used as a semi-reactive controller for robots. PSL has previously been used as a method for LFD, but suffered from combinatorial explosion when applied to data with many dimensions, such as high dimensional sensor and motor data. A new version of PSL, referred to as Fuzzy Predictive Sequence Learning (FPSL), is presented and evaluated in this chapter. FPSL is implemented as a Fuzzy Logic rule base and works on a continuous state space, in contrast to the discrete state space used in the original design of PSL. The evaluation of FPSL shows a significant performance improvement in comparison to the discrete version of the algorithm. Applied to an LFD task in a simulated apartment environment, the robot is able to learn to navigate to a specific location, starting from an unknown position in the apartment.

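    The chapter summarized above replaces PSL's discrete states with fuzzy rules over a continuous state space. The toy below illustrates that move with two hypothetical wall-following rules and weighted-average defuzzification; the membership shapes and rule outputs are invented for illustration, not taken from FPSL.

    ```python
    def tri(x, a, b, c):
        """Triangular fuzzy membership function over a continuous value."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def turn_rate(distance):
        """Two hypothetical rules on a continuous wall-distance reading:
        IF near THEN turn away (0.8); IF far THEN go straight (0.0)."""
        near = tri(distance, -0.1, 0.0, 0.5)
        far = tri(distance, 0.4, 1.0, 1.6)
        total = near + far
        if total == 0.0:
            return 0.0
        # weighted average of rule outputs (simple defuzzification)
        return (near * 0.8 + far * 0.0) / total
    ```

    Between the two fuzzy sets the output blends smoothly, which is the point of moving from discrete states to a continuous state space.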
  • 6.
    Billing, Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Simultaneous control and recognition of demonstrated behavior (2011). Report (Other academic)
    Abstract [en]

    A method for Learning from Demonstration (LFD) is presented and evaluated on a simulated Robosoft Kompai robot. The presented algorithm, called Predictive Sequence Learning (PSL), builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. The generated rule base can be used to control the robot and to predict expected sensor events in response to executed actions. The rule base can be trained under different contexts, represented as fuzzy sets. In the present work, contexts are used to represent different behaviors. Several behaviors can in this way be stored in the same rule base and partly share information. The context that best matches present circumstances can be identified using the predictive model and the robot can in this way automatically identify the most suitable behavior for present circumstances. The performance of PSL as a method for LFD is evaluated with, and without, contextual information. The results indicate that PSL without contexts can learn and reproduce simple behaviors. The system also successfully identifies the most suitable context in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contexts. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 7.
    Billing, Erik
    et al.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Simultaneous recognition and reproduction of demonstrated behavior (2015). In: Biologically Inspired Cognitive Architectures, ISSN 2212-683X, Vol. 12, pp. 43-53. Journal article (Refereed)
    Abstract [en]

    Prediction of sensory-motor interactions with the world is often referred to as a key component of cognition. We here demonstrate that prediction of sensory-motor events, i.e., relationships between percepts and actions, is sufficient to learn navigation skills for a robot navigating in an apartment environment. In the evaluated application, the simulated Robosoft Kompai robot learns from human demonstrations. The system builds fuzzy rules describing temporal relations between sensory-motor events recorded while a human operator is tele-operating the robot. With this architecture, referred to as Predictive Sequence Learning (PSL), learned associations can be used to control the robot and to predict expected sensor events in response to executed actions. The predictive component of PSL is used in two ways: (1) to identify which behavior best matches the current context and (2) to decide when to learn, i.e., update the confidence of different sensory-motor associations. Using this approach, knowledge interference due to over-fitting of an increasingly complex world model can be avoided. The system can also automatically estimate the confidence in the currently executed behavior and decide when to switch to an alternate behavior. The performance of PSL as a method for learning from demonstration is evaluated with, and without, contextual information. The results indicate that PSL without contextual information can learn and reproduce simple behaviors, but fails when the behavioral repertoire becomes more diverse. When a contextual layer is added, PSL successfully identifies the most suitable behavior in almost all test cases. The robot's ability to reproduce more complex behaviors, with partly overlapping and conflicting information, significantly increases with the use of contextual information. The results support a further development of PSL as a component of a dynamic hierarchical system performing control and predictions on several levels of abstraction.

  • 8.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Learning High-Level Behaviors From Demonstration Through Semantic Networks (2012). In: Proceedings of 4th International Conference on Agents and Artificial Intelligence, 2012, pp. 419-426. Conference paper (Refereed)
    Abstract [en]

    In this paper we present an approach for high-level behavior recognition and selection integrated with a low-level controller to help the robot to learn new skills from demonstrations. By means of a Semantic Network as the core of the method, the robot gains the ability to model the world with concepts and relate them to low-level sensory-motor states. We also show how the generalization ability of Semantic Networks can be used to extend learned skills to new situations.

  • 9.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards Goal Based Architecture Design for Learning High-Level Representation of Behaviors from Demonstration (2013). In: 2013 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2013, pp. 67-74. Conference paper (Refereed)
    Abstract [en]

    This paper gives a brief overview of challenges in designing cognitive architectures for Learning from Demonstration. By investigating features and functionality of some related architectures, we propose a modular architecture particularly suited for sequential learning of high-level representations of behaviors. We head towards designing and implementing goal based imitation learning that not only allows the robot to learn necessary conditions for executing particular behaviors, but also to understand the intents of the tutor and reproduce the same behaviors accordingly.

  • 10.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Jevtić, Aleksandar
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Applying Ant Colony Optimization Algorithms for High-Level Behavior Learning and Reproduction from Demonstrations (2015). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 65, pp. 24-39. Journal article (Refereed)
    Abstract [en]

    In domains where robots carry out humans' tasks, the ability to learn new behaviors easily and quickly plays an important role. Two major challenges with Learning from Demonstration (LfD) are to identify what information in a demonstrated behavior requires attention by the robot, and to generalize the learned behavior such that the robot is able to perform the same behavior in novel situations. The main goal of this paper is to incorporate Ant Colony Optimization (ACO) algorithms into LfD in an approach that focuses on understanding the tutor's intentions and learning conditions to exhibit a behavior. The proposed method combines ACO algorithms with semantic networks and a spreading activation mechanism to reason and generalize the knowledge obtained through demonstrations. The approach also provides structures for behavior reproduction under new circumstances. Finally, applicability of the system in an object shape classification scenario is evaluated.

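    The paper above couples ACO with semantic networks and a spreading activation mechanism. A minimal sketch of spreading activation over a concept graph follows; the toy network, decay factor, and step count are illustrative assumptions, not the paper's model.

    ```python
    def spread_activation(graph, sources, decay=0.5, steps=2):
        """Propagate activation from observed concepts through a semantic
        network (adjacency dict); strongly activated nodes suggest which
        concepts, and hence which behavior, are currently relevant."""
        activation = {node: 0.0 for node in graph}
        for s in sources:
            activation[s] = 1.0
        for _ in range(steps):
            nxt = dict(activation)
            for node, neighbors in graph.items():
                for nb in neighbors:
                    nxt[nb] = nxt.get(nb, 0.0) + activation[node] * decay
            activation = nxt
        return activation

    # hypothetical toy network: seeing a cup activates related action concepts
    network = {'cup': ['grasp'], 'grasp': ['pick_up'], 'pick_up': []}
    ranked = spread_activation(network, ['cup'])
    ```

    Activation decays with graph distance, so concepts close to what was observed dominate the ranking.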
  • 11.
    Fonooni, Benjamin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Priming as a means to reduce ambiguity in learning from demonstration (2016). In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 8, no. 1, pp. 5-19. Journal article (Refereed)
    Abstract [en]

    Learning from Demonstration (LfD) is an established robot learning technique by which a robot acquires a skill by observing a human or robot teacher demonstrating the skill. In this paper we address the ambiguity involved in inferring the intention behind one or several demonstrations. We suggest a method based on priming, and a memory model with similarities to human learning. Conducted experiments show that the developed method leads to faster and improved understanding of the intention behind a demonstration by reducing ambiguity.

  • 12.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Studies in knowledge representation: modeling change - the frame problem: pictures and words (1985). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In two studies, the author attempts to develop a general symbol theoretical approach to knowledge representation.

    The first study, Modeling change - the frame problem, critically examines the - so far unsuccessful - attempts to solve the notorious frame problem. By discussing and analyzing a number of related problems - the prediction problem, the revision problem, the qualification problem, and the book-keeping problem - the frame problem is distinguished as the problem of finding a representational form permitting a changing, complex world to be efficiently and adequately represented. This form, it is argued, is dictated by the metaphysics of the problem world, the fundamental form of the symbol system we humans use in rightly characterizing the world.

    In the second study, Pictures and words, the symbol theoretical approach is made more explicit. The subject is the distinction between pictorial (non-linguistic, non-propositional, analogical, "direct") representation and verbal (linguistic, propositional) representation, and the further implications of this distinction. The study focuses on pictorial representation, which has received little attention compared to verbal representation. Observations, ideas, and theories in AI, cognitive psychology, and philosophy are critically examined. The general conclusion is that there is as yet no cogent and mature theory of pictorial representation that gives good support to computer applications. The philosophical symbol theory of Nelson Goodman is found to be the most thoroughly developed and most congenial with the aims and methods of AI. Goodman's theory of pictorial representation, however, in effect excludes computers from the use of pictures. In the final chapter, an attempt is made to develop Goodman's analysis of pictures further, turning it into a theory useful to AI. The theory outlined builds on Goodman's concept of exemplification. The key idea is that a picture is a model of a description that has the depicted object as its standard model. One consequence is that pictorial and verbal forms of representation are seen less as competing alternatives than as complementary forms of representation mutually supporting and depending on each other.

  • 13.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    The dynamism of information access for a mobile agent in a dynamic setting and some of its implications (2011). In: Proceedings IACAP 2011: The computational turn: Past, presents, futures? / [ed] Charles Ess & Ruth Hagengruber, Münster: Verlagshaus Monsenstein und Vannerdat OHG, 2011, pp. 94-96. Conference paper (Refereed)
  • 14.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    The ubiquitous button (2014). In: interactions, ISSN 1072-5520, E-ISSN 1558-3449, Vol. 21, no. 3, pp. 26-33. Journal article (Refereed)
    Abstract [en]

    Why are buttons so common in contemporary artifacts and yet so often a source of irritation and trouble? Could we, by reinstating the natural mode of operation with traditional mechanical systems, dispel our confusions and remedy our confirmation deficiencies? Probably not.

  • 15.
    Janlert, Lars-Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stolterman, Erik
    School of Informatics and Computing, Indiana University.
    Complex interaction (2010). In: ACM Transactions on Computer-Human Interaction, ISSN 1073-0516, E-ISSN 1557-7325, Vol. 17, no. 2. Journal article (Refereed)
    Abstract [en]

    An almost explosive growth of complexity puts pressure on people in their everyday doings. Digital artifacts and systems are at the core of this development. How should we handle complexity aspects when designing new interactive devices and systems? In this article we begin an analysis of interaction complexity. We portray different views of complexity; we explore not only negative aspects of complexity, but also positive, making a case for the existence of benign complexity. We argue that complex interaction is not necessarily bad, but designers need a deeper understanding of interaction complexity and need to treat it in a more intentional and thoughtful way. We examine interaction complexity as it relates to different loci of complexity: internal, external, and mediated complexity. Our purpose with these analytical exercises is to pave the way for design that is informed by a more focused and precise understanding of interaction complexity.

  • 16.
    Janlert, Lars-Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stolterman, Erik
    Indiana University, Bloomington.
    Faceless Interaction - A Conceptual Examination of the Notion of Interface: past, present and future (2015). In: Human-Computer Interaction, ISSN 0737-0024, E-ISSN 1532-7051, Vol. 30, no. 6, pp. 507-539. Journal article (Refereed)
    Abstract [en]

    In the middle of the present struggle to keep interaction complexity in check as artifact complexity continues to rise and the technical possibilities to interact multiply, the notion of interface is scrutinized. First, a limited number of previous interpretations or thought styles of the notion are identified and discussed. This serves as a framework for an analysis of the current situation with regard to complexity, control, and interaction, leading to a realization of the crucial role of surface in contemporary understanding of interaction. The potential of faceless interaction, interaction that transcends traditional reliance on surfaces, is then examined and discussed; liberating possibilities as well as complicating effects and dangers are pointed out, ending with a sketch of a possibly emerging new thought style.

  • 17.
    Janlert, Lars-Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stolterman, Erik
    Umeå University, Faculty of Social Sciences, Department of Informatics.
    The character of things (1997). In: Design Studies, ISSN 0142-694X, E-ISSN 1872-6909, Vol. 18, no. 3, pp. 297-314. Journal article (Refereed)
    Abstract [en]

    People, as well as things, appear to have character--high-level attributes that help us understand and relate to them. A character is a coherent set of characteristics and attributes that apply to appearance and behaviour alike, cutting across different functions, situations and value systems--esthetical, technical, ethical--providing support for anticipation, interpretation and interaction. Consistency in character may become more important than ever in the increasingly complex artifacts of our computer-supported future.

  • 18.
    Janlert, Lars-Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stolterman, Erik
    The Meaning of Interactivity: Some Proposals for Definitions and Measure (2017). In: Human-Computer Interaction, ISSN 0737-0024, E-ISSN 1532-7051, Vol. 32, no. 3, pp. 103-138. Journal article (Refereed)
    Abstract [en]

    New interactive applications, artifacts, and systems are constantly being added to our environments, and there are some concerns in the human-computer interaction research community that increasing interactivity might not be just to the good. But what is it that is supposed to be increasing, and how could we determine whether it is? To approach these issues in a systematic and analytical fashion, relying less on common intuitions and more on clearly defined concepts and when possible quantifiable properties, we take a renewed look at the notion of interactivity and related concepts. The main contribution of this article is a number of definitions and terms, and the beginning of an attempt to frame the conditions of interaction and interactivity. Based on this framing, we also propose some possible approaches for how interactivity can be measured.

  • 19.
    Janlert, Lars-Erik
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stolterman, Erik
    School of Informatics and Computing, Indiana University Bloomington.
    Things that keep us busy: the elements of interaction (2017). Book (Refereed)
    Abstract [en]

    We are surrounded by interactive devices, artifacts, and systems. The general assumption is that interactivity is good -- that it is a positive feature associated with being modern, efficient, fast, flexible, and in control. Yet there is no very precise idea of what interaction is and what interactivity means. In this book, Lars-Erik Janlert and Erik Stolterman investigate the elements of interaction and how they can be defined and measured. They focus on interaction with digital artifacts and systems but draw inspiration from the broader, everyday sense of the word.

    Viewing the topic from a design perspective, Janlert and Stolterman take as their starting point the interface, which is designed to implement the interaction. They explore how the interface has changed over time, from a surface with knobs and dials to clickable symbols to gestures to the absence of anything visible. Janlert and Stolterman examine properties and qualities of designed artifacts and systems, primarily those that are open for manipulation by designers, considering such topics as complexity, clutter, control, and the emergence of an expressive-impressive style of interaction. They argue that only when we understand the basic concepts and terms of interactivity and interaction will we be able to discuss seriously its possible futures.

  • 20.
    Pederson, Thomas
    et al.
    IT University of Copenhagen.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Surie, Dipak
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A situative space model for mobile mixed-reality computing (2011). In: IEEE Pervasive Computing, ISSN 1536-1268, E-ISSN 1558-2590, Vol. 10, no. 4, pp. 73-83. Journal article (Refereed)
    Abstract [en]

    This article proposes a situative space model that links the physical and virtual realms and sets the stage for complex human-computer interaction defined by what a human agent can see, hear, and touch, at any given point in time.

  • 21.
    Pederson, Thomas
    et al.
    Innovative Communication Group, IT University of Copenhagen.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Surie, Dipak
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards a model for egocentric interaction with physical and virtual objects (2010). In: Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, New York, USA: ACM Press, 2010, pp. 755-758. Conference paper (Refereed)
    Abstract [en]

    Designers of mobile context-aware systems are struggling with the problem of conceptually incorporating the real world into the system design. We present a body-centric modeling framework (as opposed to device-centric) that incorporates physical and virtual objects of interest on the basis of proximity and human perception, framed in the context of an emerging "egocentric" interaction paradigm.

  • 22.
    Pederson, Thomas
    et al.
    Innovative Communication Group, IT University of Copenhagen.
    Piccinno, Antonio
    Dip. di Informatica, Università degli Studi di Bari.
    Surie, Dipak
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ardito, Carmelo
    Dip. di Informatica, Università degli Studi di Bari.
    Caporusso, Nicholas
    Dip. di Informatica, Università degli Studi di Bari.
    Janlert, Lars-Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Framing the Next-Generation ‘Desktop’ using Proximity and Human Perception (2008). In: ACM CHI 2008 Conference Workshop on User Interface Description Languages for Next-Generation User Interfaces, 2008. Conference paper (Refereed)
    Abstract [en]

    Personal computing, and therefore Human-Computer Interaction (HCI), is becoming a seamlessly integrated part of everyday activity, to the point where "computing" is inseparable from "activity". A modelling problem arises in these emerging mobile and ubiquitous computing situations because it is hard to determine the spatial and operational limits of an ongoing activity: for the human performing the activity, for the computer system monitoring and/or supporting it, and for the modeller observing it. It is also an open question how to model the causal relations between physical (real-world) and virtual (digital-world) phenomena that these "intelligent environments" can be programmed to maintain, whether defined by software engineers or by the end-users themselves. We propose a modelling framework that addresses the above-mentioned issues and present our initial attempts to create a User Interface Description Language (UIDL) based on the framework.

  • 23.
    Sjölie, Daniel
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Bodin, Kenneth
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Elgh, Eva
    Umeå universitet, Medicinska fakulteten, Institutionen för samhällsmedicin och rehabilitering, Geriatrik.
    Eriksson, Johan
    Umeå universitet, Medicinska fakulteten, Institutionen för integrativ medicinsk biologi (IMB).
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Nyberg, Lars
    Umeå universitet, Medicinska fakulteten, Institutionen för integrativ medicinsk biologi (IMB).
    Effects of interactivity and 3D-motion on mental rotation brain activity in an immersive virtual environment (2010). In: Proceedings of the 28th international conference on Human factors in computing systems, Association for Computing Machinery (ACM), 2010, pp. 869-878. Conference paper (Refereed)
    Abstract [en]

    The combination of virtual reality (VR) and brain measurements is a promising development for HCI, but the maturation of this paradigm requires more knowledge about how brain activity is influenced by the parameters of VR applications. To this end, we investigate the influence of two prominent VR parameters, 3D-motion and interactivity, while brain activity is measured during a mental rotation task using functional MRI (fMRI). A mental rotation network of brain areas is identified, matching previous results. The addition of interactivity increases activation in core areas of this network, with more profound effects in frontal and preparatory motor areas. The increases from 3D-motion are restricted primarily to visual areas. We relate these effects to emerging theories of cognition and to potential applications for brain-computer interfaces (BCIs). Our results demonstrate one way to provoke increased activity in task-relevant areas, making it easier to detect and use for the adaptation and development of HCI.

  • 24.
    Sjölie, Daniel
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Bodin, Kenneth
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Eriksson, Johan
    Umeå universitet, Medicinska fakulteten, Institutionen för integrativ medicinsk biologi (IMB).
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Using brain imaging to assess interaction in immersive VR (2009). In: Challenges in the evaluation of usability and user experience in reality based interaction, ed. Georgios Christou, Effie Lai-Chong Law, William Green, & Kasper Hornbæk. Boston, MA, USA: ACM, 2009, pp. 23-27. Conference paper (Refereed)
    Abstract [en]

    We have developed a system in which the combination of functional brain imaging (fMRI) and Virtual Reality (VR) can be used to study and evaluate user experience based on brain activation and models of cognitive neuroscience. The ability to study the brain during natural interaction with an (ecologically valid) environment has great potential for several areas of research and development, including evaluation of Reality-Based Interaction (RBI). The RBI concept of tradeoffs is of particular interest, since we want to further explore the relation between how the brain works with an accepted reality and what happens when this reality is disrupted. We present the system with an overview of conducted studies to illustrate its capabilities and feasibility. In particular, feasibility is supported by the fact that the brain activations seen in these studies match expectations based on the existing literature. Further discussion elaborates on the relation to RBI and evaluation, and finally some possible future work is presented.

  • 25.
    Sjölie, Daniel
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Mind the brain: The Potential of Basic Principles for Brain Function and Interaction (2013). Report (Other academic)
    Abstract [en]

    The prevalence and complexity of human-computer interaction makes a general understanding of human cognition important in design and development. Knowledge of some basic, relatively simple principles of human brain function can significantly aid such understanding in the interdisciplinary field of Human-Computer Interaction (HCI), where no one can be an expert at everything. This paper explains a few such principles, relates them to human-computer interaction, and illustrates their potential. Most of these ideas are not new, but wider appreciation of the potential power of basic principles is only recently emerging as a result of developments within cognitive neuroscience and information theory. The starting point in this paper is the concept of mental simulation. Important and useful properties of mental simulations are explained using basic principles such as the free-energy principle. These concepts and their properties are further related to HCI by drawing on similarities to the theoretical framework of activity theory. Activity theory is particularly helpful for relating simple but abstract principles to real-world applications and larger contexts. The established use of activity theory as a theoretical framework for HCI also exemplifies how theory may benefit HCI in general. Briefly, two basic principles that permeate this perspective are: the need for new skills and knowledge to build upon and fit into what is already there (grounding), and the importance of predictions and prediction errors (simulation).

  • 26.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    Innovative Communication Group, IT University of Copenhagen.
    Roy, Dilip
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Egocentric interaction as a tool for designing ambient ecologies: the case of the easy ADL ecology (2012). In: Pervasive and Mobile Computing, ISSN 1574-1192, E-ISSN 1873-1589, Vol. 8, no. 4, pp. 597-613. Journal article (Refereed)
    Abstract [en]

    The visions of ambient intelligence demand novel interaction paradigms that enable designers and system developers to frame and manage the dynamic and complex interaction between humans and environments populated with physical (real) and virtual (digital) objects of interest. So far, many proposed approaches have adhered to a device-centric stance when including virtual objects in the ambient ecology; a stance inherited from existing interaction paradigms for mobile and stationary interactive devices. In this article, we introduce egocentric interaction as an alternative approach, taking the human agent's body and mind as the center of reference. We show how this interaction paradigm has influenced both the conception and the implementation of the easy ADL ecology, comprising smart objects, a personal activity-centric middleware attempting to simplify interaction given available resources, ambient intelligence applications aimed at everyday activity support, and a human agent literally in the middle of it all.

  • 27.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Jäckel, Florian
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    IT University of Copenhagen.
    Situative space tracking within smart environments (2010). In: Proceedings of the 6th International Conference on Intelligent Environments, IE 2010, Washington, DC, USA: IEEE Computer Society, 2010, pp. 152-157. Conference paper (Refereed)
    Abstract [en]

    This paper describes our efforts in modeling and tracking a human agent’s situation based on their possibilities to perceive and act upon objects (both physical and virtual) within smart environments. A Situative Space Model is proposed, and a WLAN signal-strength-based situative space tracking system is presented that positions objects within individual situative spaces (without tracking their absolute positions) distributed across multiple modalities such as vision, audio, and touch. As a proof of concept, a preliminary evaluation of the tracking system was performed by two subjects within a living-laboratory smart home environment, where a global tracking precision of 83.4% and a recall of 88.6% were obtained.
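    As a hedged aside (not part of the original entry): the precision and recall figures reported above follow the standard detection-performance definitions. A minimal Python sketch, where the function names and all counts are illustrative assumptions rather than data from the paper:

```python
# Standard precision/recall definitions, as used when reporting
# tracking performance. All counts below are illustrative only.

def precision(true_pos: int, false_pos: int) -> float:
    """Fraction of reported detections that were correct."""
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    """Fraction of actual cases that were detected."""
    return true_pos / (true_pos + false_neg)

# Illustrative counts: 90 correct detections, 10 spurious, 15 missed.
print(f"precision = {precision(90, 10):.3f}")  # precision = 0.900
print(f"recall    = {recall(90, 15):.3f}")     # recall    = 0.857
```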

  • 28.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    A Smart Home Experience using Egocentric Interaction Design Principles (2012). In: 15th IEEE International Conference on Computational Science and Engineering (CSE 2012) / 10th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC 2012), 2012, pp. 656-665. Conference paper (Refereed)
    Abstract [en]

    The landscape of ubiquitous computing, comprising numerous interconnected computing devices seamlessly integrated into everyday environments, introduces a need for research beyond human-computer interaction: in particular, research that incorporates human-environment interaction. While technological advancements have driven the field of ubiquitous computing, the ultimate focus should center on human agents and their experience of interacting with ubiquitous computing systems offering smart services. This paper describes egocentric interaction as a human-body-centered interaction paradigm for framing human-environment interaction using proximity and human perception. A smart home environment capable of supporting physical-virtual activities and designed according to egocentric interaction principles is used to explore the human experience it offers, yielding positive results as a proof of concept.

  • 29.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    Innovative Communication Group, IT University of Copenhagen.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Human cognition as a foundation for the emerging egocentric interaction paradigm (2012). In: Human-Computer Interaction: The Agency Perspective. Berlin/Heidelberg: Springer, 2012, pp. 349-374. Book chapter (Refereed)
    Abstract [en]

    This chapter presents an “egocentric interaction paradigm” (EIP) centered on human agents rather than on the notion of a user. More specifically, this paradigm is based on the perception, action, intention, and attention capabilities and limitations of human agents. Traditional and emerging interaction paradigms are typically tied to a specific computing environment, specific devices, or specific human capabilities. The novelty of the proposed approach stems from the aim of developing a comprehensive and integrated theoretical approach centered on the individual human agent. Development in Human-Computer Interaction (HCI) has been closely related to the understanding and utilization of natural human skills and abilities. This work attempts to understand and model a human agent, and in particular their cognitive capabilities, in facilitating HCI. The EIP is based on principles such as situatedness and embodiment, the physical-virtual equity principle, and the proximity principle. A situative space model built upon our understanding of human cognition is described in detail, followed by our experience in exploring the egocentric interaction paradigm in the easy ADL home.

  • 30.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    IT University of Copenhagen.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    The easy ADL home: A physical-virtual approach to domestic living (2010). In: Journal of Ambient Intelligence and Smart Environments, ISSN 1876-1364, E-ISSN 1876-1372, Vol. 2, no. 3, pp. 287-310. Journal article (Refereed)
    Abstract [en]

    Smart environments worthy of the name need to capture, interpret, and support human activities that take place within their realms. Most existing efforts tend to focus on either real-world activities or activities taking place in the virtual world accessed through digital devices. However, as digital computation continues to permeate our everyday real-world environments, and as the border between physical and digital continues to blur for the human agents acting in these environments, we need system design approaches that can cope with human activities that span the physical-virtual gap. In this paper, we present such an approach and use it to design a smart home intended to support Activities of Daily Living (ADL). The easy ADL home is built around a wearable personal server running a personal ADL support middleware, together with a set of computationally augmented everyday objects. An initial qualitative study of the system involving 20 subjects revealed a highly positive attitude (score 4.37 out of 5) towards the system's capability of co-locating and synchronizing physical and virtual events throughout the everyday activity scenarios, while classical usability aspects, in particular those related to the gesture-based input (score 2.89 out of 5), leave room for improvement.

  • 31.
    Surie, Dipak
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Pederson, Thomas
    Innovative Communication Group, IT University of Copenhagen.
    Lagriffoul, Fabien
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Janlert, Lars-Erik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Sjölie, Daniel
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Activity recognition using an egocentric perspective of everyday objects (2007). In: Proceedings of the 4th International Conference on Ubiquitous Intelligence and Computing, Springer Berlin/Heidelberg, 2007, pp. 246-257. Conference paper (Refereed)
    Abstract [en]

    This paper presents an activity recognition approach based on tracking a specific human actor’s current object manipulation actions, complemented by two kinds of situational information: 1) the set of objects that are visually observable (inside the “observable space”), and 2) the set of objects that are technically graspable (inside the “manipulable space”). This “egocentric” model is inspired by situated action theory and offers the advantage of not depending on technology for absolute positioning of either the human or the objects. Applied in an immersive Virtual Reality environment, the proposed activity recognition approach shows a recognition precision of 89% at the activity level and 76% at the action level among 10 everyday home activities.
