Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour
Aler Tubella, Andrea (Umeå universitet). ORCID iD: 0000-0002-8423-8029
Theodorou, Andreas (Umeå universitet). ORCID iD: 0000-0001-9499-1535
Dignum, Frank (Umeå universitet).
Dignum, Virginia (Umeå universitet). ORCID iD: 0000-0001-7409-5813
2019 (English). In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019. Conference paper, published paper (Other academic)
Abstract [en]

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains that directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains.

In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems.

The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability: stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
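The monitoring idea described in the abstract can be sketched in a few lines of code. This is an illustrative reading of the approach, not the authors' implementation: a norm is a checkable predicate over a system's inputs and outputs, and a wrapper verifies each call against the declared norms. All names (`Norm`, `GlassBox`, the loan-scoring example and its fairness norm) are hypothetical.

```python
# Hedged sketch: abstract values mapped to explicit, verifiable norms
# (predicates over inputs and outputs), wrapped around an opaque system.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Norm:
    """A concrete, checkable interpretation of an abstract value."""
    value: str                            # abstract value it operationalises
    description: str                      # human-readable norm statement
    check: Callable[[dict, Any], bool]    # predicate over (inputs, output)


class GlassBox:
    """Wraps any black-box system and checks every call against the norms."""

    def __init__(self, system: Callable[[dict], Any], norms: list[Norm]):
        self.system = system
        self.norms = norms

    def run(self, inputs: dict) -> tuple[Any, list[str]]:
        output = self.system(inputs)
        # Collect the descriptions of every norm the call violates.
        violations = [n.description for n in self.norms
                      if not n.check(inputs, output)]
        return output, violations


def opaque_scorer(inputs: dict) -> bool:
    """Stand-in for an opaque model (e.g. a neural network)."""
    return inputs["income"] > 30000


# One possible interpretation of 'fairness' as a verifiable norm:
# the decision must not change when the 'gender' field changes.
fairness = Norm(
    value="fairness",
    description="decision must not depend on the 'gender' field",
    check=lambda inp, out: out == opaque_scorer({**inp, "gender": "other"}),
)

box = GlassBox(opaque_scorer, [fairness])
decision, violations = box.run({"income": 45000, "gender": "female"})
print(decision, violations)  # True []
```

Because only inputs and outputs are inspected, the same wrapper applies unchanged whether `opaque_scorer` is a rule set, a deep network, or an agent-based system, which is what makes cross-system comparison possible.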

Place, publisher, year, edition, pages
2019.
Keywords [en]
artificial intelligence, ethics, verification, safety, transparency
National subject category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-159953. OAI: oai:DiVA.org:umu-159953. DiVA id: diva2:1322782
Conference
28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, August 10-16, 2019.
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 825619. Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2019-10-30. Bibliographically approved.

Open Access in DiVA

Full text not available in DiVA

Other links

ArXiv
