Umeå University, umu.se Publications
From "Explainable AI" to "Graspable AI"
Arts and Communication, Malmö University, Sweden.
College of Information Sciences and Technology, Pennsylvania State University, United States.
Machine Learning Visualization Lab, Decisive Analytics Corporation, United States.
Department of Engineering, Aarhus University, Denmark.
2021 (English). In: TEI 2021 - Proceedings of the 15th International Conference on Tangible, Embedded, and Embodied Interaction, Association for Computing Machinery, Inc., 2021, article id 69. Conference paper, Published paper (Refereed)
Abstract [en]

Since the advent of Artificial Intelligence (AI) and Machine Learning (ML), researchers have asked how intelligent computing systems could interact with and relate to their users and their surroundings, leading to debates around issues of biased AI systems, the ML black box, user trust, users' perception of control over the system, and system transparency, to name a few. All of these issues are related to how humans interact with AI or ML systems through an interface that uses different interaction modalities. Prior studies address these issues from a variety of perspectives, spanning from understanding and framing the problems through ethics and Science and Technology Studies (STS) perspectives to finding effective technical solutions to the problems. What is shared among almost all those efforts, however, is an assumption that if systems can explain the how and why of their predictions, people will have a better perception of control, will therefore trust such systems more, and may even be able to correct their shortcomings. This research field has been called Explainable AI (XAI). In this studio, we take stock of prior efforts in this area; however, we focus on using Tangible and Embodied Interaction (TEI) as an interaction modality for understanding ML. We note that the affordances of physical forms and their behaviors can potentially contribute not only to the explainability of ML systems but also to an open environment for criticism. This studio seeks both to critique explainable ML terminology and to map the opportunities that TEI can offer to HCI for designing more sustainable, graspable, and just intelligent systems.

Place, publisher, year, edition, pages
Association for Computing Machinery, Inc., 2021, article id 69.
Keywords [en]
Artificial Intelligence, Explainable AI, Interaction Design, Machine Learning, Tangible Embodied Interaction, TEI, XAI
National Category
Human Computer Interaction; Other Engineering and Technologies
Identifiers
URN: urn:nbn:se:umu:diva-181723
DOI: 10.1145/3430524.3442704
ISI: 001180182600069
Scopus ID: 2-s2.0-85102059863
ISBN: 9781450382137 (electronic)
OAI: oai:DiVA.org:umu-181723
DiVA id: diva2:1539682
Conference
15th International Conference on Tangible, Embedded, and Embodied Interaction, TEI 2021
Available from: 2021-03-25. Created: 2021-03-25. Last updated: 2025-04-24. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Wiberg, Mikael

Search in DiVA

By author/editor
Wiberg, Mikael
By organisation
Department of Informatics
Human Computer Interaction; Other Engineering and Technologies
