The Glass Box Approach: Verifying Contextual Adherence to Values
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-8423-8029
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7409-5813
2019 (English). Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to be deployed safely, then people need to understand how the system is interpreting and whether it is adhering to the relevant moral values. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains.

In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass Box’ around the system by mapping moral values into contextual verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value(s) in a specific context. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems – from deep neural networks to agent-based systems – whereas by making the context explicit we expose the different perspectives and frameworks that are taken into account when subsuming moral values into specific norms and functionalities. We present a modal logic formalisation of the Glass Box approach which is domain-agnostic, implementable, and expandable.
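The monitoring idea described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation or their modal logic formalisation: all names (`GlassBox`, `loan_model`, the norms and the "lending" context) are illustrative assumptions. It shows the core mechanism only: moral values are operationalised as contextual norms, i.e. predicates over the observable inputs and outputs of an otherwise opaque system, and adherence in a context holds iff no norm is violated.

```python
# Hypothetical sketch of Glass Box monitoring (illustrative names/norms,
# not the authors' code): norms are predicates checked per context on the
# inputs and outputs of a black-box system.

class GlassBox:
    """Wraps a black-box callable and checks input/output norms per context."""

    def __init__(self, system, input_norms, output_norms):
        self.system = system              # any callable: input -> output
        self.input_norms = input_norms    # {context: {norm_name: pred(x)}}
        self.output_norms = output_norms  # {context: {norm_name: pred(x, y)}}

    def run(self, x, context):
        # Collect violated input norms before invoking the system.
        violated = [n for n, p in self.input_norms.get(context, {}).items()
                    if not p(x)]
        y = self.system(x)
        # Collect violated output norms after observing the result.
        violated += [n for n, p in self.output_norms.get(context, {}).items()
                     if not p(x, y)]
        # The system stays "within the box" in this context iff violated == [].
        return y, violated


# Toy black box: a loan decision whose internals we cannot inspect.
def loan_model(applicant):
    return applicant["income"] > 20000

gb = GlassBox(
    loan_model,
    input_norms={"lending": {
        "no_protected_attribute": lambda x: "ethnicity" not in x}},
    output_norms={"lending": {
        "explainable_rejection": lambda x, y: y or "reason" in x}},
)

decision, violations = gb.run({"income": 30000}, context="lending")
```

Because the monitor sees only inputs and outputs, the same `GlassBox` wrapper could in principle sit around a neural network, a rule-based agent, or any other callable, which mirrors the abstract's point about comparing vastly different systems.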

Place, publisher, year, edition, pages
2019.
Keywords [en]
artificial intelligence, safety, verification, ethics
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-160949
OAI: oai:DiVA.org:umu-160949
DiVA, id: diva2:1330967
Conference
AISafety 2019, Macao, August 11-12, 2019
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-06-26 Created: 2019-06-26 Last updated: 2019-06-27

Open Access in DiVA

No full text in DiVA

Authority records BETA

Aler Tubella, Andrea; Dignum, Virginia

