Publications (3 of 3)
Aler Tubella, A., Theodorou, A., Dignum, F. & Dignum, V. (2019). Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence. Paper presented at the 28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, August 10-16, 2019.
Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour
2019 (English). In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019. Conference paper, Published paper (Other academic)
Abstract [en]

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains.

In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems; from deep neural networks to agent-based systems.

The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability; stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
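
The abstract describes the approach only in prose; a minimal sketch of the core idea, in Python and with all names hypothetical (this is not the authors' implementation), is a monitor that wraps an otherwise opaque system and checks every input-output pair against the norms chosen to interpret each moral value:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

Norm = Callable[[Any, Any], bool]  # True iff the (input, output) pair complies

@dataclass
class GlassBox:
    """Wrap an opaque system and check its observable I/O against norms."""
    system: Callable[[Any], Any]
    norms: Dict[str, List[Norm]]                   # value name -> norms interpreting it
    violations: List[Tuple[str, Any, Any]] = field(default_factory=list)

    def run(self, x: Any) -> Any:
        y = self.system(x)                         # the system itself stays a black box
        for value, checks in self.norms.items():
            for norm in checks:
                if not norm(x, y):                 # outside the box: record the breach
                    self.violations.append((value, x, y))
        return y

# Hypothetical example: a loan scorer monitored for a norm interpreting
# "fairness" as "the decision does not change with the protected attribute".
def score(applicant: dict) -> bool:
    return applicant["income"] > 30_000

def ignores_protected_attribute(x: dict, y: bool) -> bool:
    return score({**x, "gender": "other"}) == y

box = GlassBox(system=score, norms={"fairness": [ignores_protected_attribute]})
box.run({"income": 45_000, "gender": "female"})
print(box.violations)   # [] -> behaviour stayed within the glass box for this input
```

Because the check sees only inputs and outputs, the same monitor applies unchanged whether the wrapped system is a deep neural network or an agent-based program, which is the comparability property the abstract highlights.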

Keywords
artificial intelligence, ethics, verification, safety, transparency
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-159953 (URN)
Conference
28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, August 10-16, 2019.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 825619
Available from: 2019-06-11 Created: 2019-06-11 Last updated: 2019-06-13
Bryson, J. J. & Theodorou, A. (2019). How society can maintain human-centric artificial intelligence. In: Marja Toivonen and Eveliina Saari (Eds.), Human-centered digitalization and services (pp. 305-323). Springer
How society can maintain human-centric artificial intelligence
2019 (English). In: Human-centered digitalization and services / [ed] Marja Toivonen and Eveliina Saari, Springer, 2019, p. 305-323. Chapter in book (Refereed)
Abstract [en]

Although not a goal universally held, maintaining human-centric artificial intelligence is necessary for society's long-term stability. Fortunately, the legal and technological problems of maintaining control are actually fairly well understood and amenable to engineering. The real problem is establishing the social and political will for assigning and maintaining accountability for artifacts when these artifacts are generated or used. In this chapter we review the necessity and tractability of maintaining human control and the mechanisms by which such control can be achieved. What makes the problem both most interesting and most threatening is that achieving consensus around any human-centered approach requires at least some measure of agreement on broad existential concerns.

Place, publisher, year, edition, pages
Springer, 2019
Series
Translational systems sciences, ISSN 2197-8832, E-ISSN 2197-8840 ; 19
Keywords
Systems artificial intelligence, Cognitive architectures, Ethics, Safety, Real-time visualisation
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-156689 (URN); 10.1007/978-981-13-7725-9_16 (DOI); 9789811377242 (ISBN); 9789811377259 (ISBN)
Available from: 2019-02-24 Created: 2019-02-24 Last updated: 2019-08-28. Bibliographically approved
Rotsidis, A., Theodorou, A. & Wortham, R. H. (2019). Robots That Make Sense: Transparent Intelligence Through Augmented Reality. Paper presented at the 2019 IUI Workshop in Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (IUI-ATEC). CEUR Workshop Proceedings
Robots That Make Sense: Transparent Intelligence Through Augmented Reality
2019 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Autonomous robots can be difficult for their developers to understand, let alone for end users. Yet, as they become increasingly integral parts of our societies, the need for affordable, easy-to-use tools to provide transparency grows. The rise of the smartphone and the improvements in mobile computing performance have gradually allowed Augmented Reality (AR) to become more mobile and affordable. In this paper we review relevant robot systems architecture and propose a new software tool to provide robot transparency through the use of AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot's goals and priorities as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within a robot. We also report on our on-going research programme and planned studies to further understand the effects of transparency on naive users and experts.
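
As a rough illustration of what such a transparency feed involves (a sketch only, with a hypothetical message format and function names, not the ABOD3-AR code): the robot publishes its current goals and priorities, and the client turns them into the labels an AR view would anchor next to the robot:

```python
import json
from dataclasses import dataclass
from typing import List

@dataclass
class GoalState:
    name: str          # e.g. "avoid-obstacle", "reach-charger"
    priority: float    # priority currently assigned by the robot's planner
    active: bool       # whether the goal is currently driving behaviour

def parse_status(raw: str) -> List[GoalState]:
    """Parse one status message published by the robot (message format assumed)."""
    return [GoalState(**g) for g in json.loads(raw)]

def overlay_labels(goals: List[GoalState], top_n: int = 3) -> List[str]:
    """Text labels for the AR overlay, highest-priority goals first."""
    ranked = sorted(goals, key=lambda g: g.priority, reverse=True)[:top_n]
    return [f"{'>' if g.active else ' '} {g.name} ({g.priority:.2f})" for g in ranked]

# Hypothetical status message, and the labels a user would see in the overlay:
msg = ('[{"name": "avoid-obstacle", "priority": 0.9, "active": true},'
       ' {"name": "reach-charger", "priority": 0.4, "active": false}]')
for label in overlay_labels(parse_status(msg)):
    print(label)
```

A real AR client would render these labels through a mobile AR framework and track the robot visually; the sketch only shows the data path from planner state to on-screen text.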

Place, publisher, year, edition, pages
CEUR Workshop Proceedings, 2019
Keywords
Responsible AI, Augmented Reality, Systems AI, Human-Robotics Interaction
National Category
Computer Vision and Robotics (Autonomous Systems); Human Computer Interaction
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-156709 (URN)
Conference
2019 IUI Workshop in Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (IUI-ATEC)
Available from: 2019-02-25 Created: 2019-02-25 Last updated: 2019-04-05. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-9499-1535
