umu.se Publications
Dignum, Virginia, Professor (ORCID iD: orcid.org/0000-0001-7409-5813)
Publications (10 of 16)
Aler Tubella, A., Theodorou, A., Dignum, F. & Dignum, V. (2019). Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence. Paper presented at 28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, August 10-16, 2019.
Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour
2019 (English). In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019. Conference paper, Published paper (Other academic).
Abstract [en]

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains.

In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems.

The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability; stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
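
As a rough illustration of the monitoring scheme this abstract describes, consider the following Python sketch. It is not the paper's implementation; the GlassBox wrapper, the loan-scoring stand-in, and the fairness norm are all assumed names invented for illustration.

```python
# Minimal sketch of the Glass-Box idea (illustrative, not the paper's code):
# abstract values are mapped to explicit, verifiable norms, expressed here as
# predicates over the wrapped system's inputs and outputs. The system itself
# stays opaque; only its I/O is checked against the norms.
from typing import Any, Callable, Dict, List

Norm = Callable[[Dict[str, Any], Any], bool]  # (inputs, output) -> compliant?

class GlassBox:
    def __init__(self, system: Callable[[Dict[str, Any]], Any], norms: List[Norm]):
        self.system = system  # opaque AI system (neural network, agent, ...)
        self.norms = norms    # norms operationalising an abstract moral value

    def run(self, inputs: Dict[str, Any]) -> Any:
        output = self.system(inputs)
        violated = [n.__name__ for n in self.norms if not n(inputs, output)]
        if violated:
            raise RuntimeError(f"Glass-Box violation of norms: {violated}")
        return output  # inputs and output stayed "within the box"

# Hypothetical norm operationalising a fairness value for a loan scorer:
def no_protected_attribute(inputs: Dict[str, Any], output: Any) -> bool:
    return "gender" not in inputs  # decisions must not see gender

def scorer(inputs: Dict[str, Any]) -> bool:  # stand-in black-box system
    return inputs["income"] > 30000

gb = GlassBox(scorer, [no_protected_attribute])
print(gb.run({"income": 45000}))  # True: output produced, norms satisfied
```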

Keywords
artificial intelligence, ethics, verification, safety, transparency
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-159953 (URN)
Conference
28th International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, August 10-16, 2019.
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); EU, Horizon 2020, 825619
Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2019-10-30. Bibliographically approved.
Aler Tubella, A. & Dignum, V. (2019). The Glass Box Approach: Verifying Contextual Adherence to Values. Paper presented at AISafety 2019, Macao, August 11-12, 2019.
The Glass Box Approach: Verifying Contextual Adherence to Values
2019 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to be deployed safely, then people need to understand how the system is interpreting, and whether it is adhering to, the relevant moral values. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains.

In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass Box’ around the system by mapping moral values into contextual verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value(s) in a specific context. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems, whereas by making the context explicit we expose the different perspectives and frameworks that are taken into account when subsuming moral values into specific norms and functionalities. We present a modal logic formalisation of the Glass Box approach which is domain-agnostic, implementable, and expandable.
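
The contextual element can be sketched as follows. This is an illustration under assumed value, context, and norm names, not the modal logic formalisation given in the paper.

```python
# Illustrative sketch only: the same abstract value maps to different
# verifiable norms depending on the context made explicit, and compliance
# is checked per context. Values, contexts, and norms here are hypothetical.
from typing import Any, Callable, Dict

Norm = Callable[[Dict[str, Any], Any], bool]  # (inputs, output) -> compliant?

# value -> context -> norm operationalising that value in that context
CONTEXTUAL_NORMS: Dict[str, Dict[str, Norm]] = {
    "privacy": {
        "healthcare": lambda i, o: "patient_id" not in str(o),
        "marketing":  lambda i, o: i.get("consent", False),
    },
}

def adheres(value: str, context: str, inputs: Dict[str, Any], output: Any) -> bool:
    """Check I/O compliance with a value under its context-specific norm."""
    return CONTEXTUAL_NORMS[value][context](inputs, output)

# The same value, checked under two different contextual interpretations:
print(adheres("privacy", "marketing", {"consent": True}, {"ad": "x"}))       # True
print(adheres("privacy", "healthcare", {}, {"patient_id": 7, "risk": 0.2}))  # False
```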

Keywords
artificial intelligence, safety, verification, ethics
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-160949 (URN)
Conference
AISafety 2019, Macao, August 11-12, 2019
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2019-06-26. Created: 2019-06-26. Last updated: 2019-06-27.
Mercuur, R., Dignum, V. & Jonker, C. M. (2019). The Value of Values and Norms in Social Simulation. JASSS: Journal of Artificial Societies and Social Simulation, 22(1), Article ID 9.
The Value of Values and Norms in Social Simulation
2019 (English). In: JASSS: Journal of Artificial Societies and Social Simulation, ISSN 1460-7425, E-ISSN 1460-7425, Vol. 22, no 1, article id 9. Article in journal (Refereed). Published.
Abstract [en]

Social simulations gain strength when agent behaviour can (1) represent human behaviour and (2) be explained in understandable terms. Agents with values and norms lead to simulation results that meet human needs for explanations, but have not been tested on their ability to reproduce human behaviour. This paper compares empirical data on human behaviour to simulated data on agents with values and norms in a psychological experiment on dividing money: the ultimatum game. We find that our agent model with values and norms produces aggregate behaviour that falls within the 95% confidence interval around observed human behaviour more often than the other tested agent models do. A main insight is that values serve as a static component in agent behaviour, whereas norms serve as a dynamic component.
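
The static/dynamic distinction might be illustrated with a toy proposer model along these lines. The update rule and parameters are assumptions for illustration, not the agent model evaluated in the paper.

```python
# Toy ultimatum-game proposers: a static value component and a dynamic norm
# component (illustrative sketch, not the paper's model).
import random

class UGAgent:
    def __init__(self, fairness_value: float):
        self.fairness_value = fairness_value  # static: how much fairness matters
        self.norm = 0.5                       # dynamic: believed "normal" share to offer

    def offer(self) -> float:
        # an offer blends the stable value with the currently held norm
        return (self.fairness_value + self.norm) / 2

    def observe(self, others_offer: float) -> None:
        # norms drift toward observed behaviour; the value itself never changes
        self.norm = 0.9 * self.norm + 0.1 * others_offer

random.seed(1)
agents = [UGAgent(random.uniform(0.2, 0.6)) for _ in range(10)]
for _ in range(1000):                  # repeated pairwise interactions
    proposer, responder = random.sample(agents, 2)
    responder.observe(proposer.offer())
print(sum(a.offer() for a in agents) / len(agents))  # mean offered share
```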

Place, publisher, year, edition, pages
JASSS, 2019
Keywords
Human Values, Norms, Ultimatum Game, Empirical Data, Agent-Based Model
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-156589 (URN); 10.18564/jasss.3929 (DOI); 000457436100012
Available from: 2019-02-22. Created: 2019-02-22. Last updated: 2019-02-22. Bibliographically approved.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. (2018). AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689-707
AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 (English). In: Minds and Machines, ISSN 0924-6495, E-ISSN 1572-8641, Vol. 28, no 4, p. 689-707. Article in journal (Refereed). Published.
Abstract [en]

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Artificial intelligence, AI4People, Data governance, Digital ethics, Governance, Ethics of AI
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-154897 (URN); 10.1007/s11023-018-9482-5 (DOI); 000452516200004
Available from: 2019-01-07. Created: 2019-01-07. Last updated: 2019-01-07. Bibliographically approved.
Dignum, V., Dignum, F., Vázquez-Salceda, J., Clodic, A., Gentile, M., Mascarenhas, S. & Augello, A. (2018). Design for Values for Social Robot Architectures. In: Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, Marco Norskov (Ed.), Envisioning Robots in Society: Power, Politics, and Public Space. Paper presented at International Research Conference Robophilosophy 2018, Transor, Vienna, Austria, February 14-17, 2018. IOS Press, 311
Design for Values for Social Robot Architectures
2018 (English). In: Envisioning Robots in Society: Power, Politics, and Public Space / [ed] Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, Marco Norskov, IOS Press, 2018, Vol. 311. Conference paper, Published paper (Refereed).
Abstract [en]

The integration of social robots in human societies requires that they are capable of taking decisions that may affect the lives of people around them. In order to ensure that these robots will behave according to shared ethical principles, an important shift in the design and development of social robots is needed, one where the main goal is to improve ethical transparency rather than technical performance, and to place human values at the core of robot design. In this abstract, we discuss the concept of ethical decision making and how to achieve trust according to the principles of Autonomy, Responsibility and Transparency (ART).

Place, publisher, year, edition, pages
IOS Press, 2018
Series
Frontiers in Artificial Intelligence and Applications, ISSN 0922-6389, E-ISSN 1879-8314 ; 311
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-157830 (URN); 10.3233/978-1-61499-931-7-43 (DOI)
Conference
International Research Conference Robophilosophy 2018, Transor, Vienna, Austria, February 14-17, 2018
Note

ISBN: 978-1-61499-930-0 (print), 978-1-61499-931-7 (online)

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-09. Bibliographically approved.
Verdiesen, I., Dignum, V. & Rahwan, I. (2018). Design Requirements for a Moral Machine for Autonomous Weapons. In: Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, Friedemann Bitsch (Ed.), Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops. Paper presented at SAFECOMP, International Conference on Computer Safety, Reliability, and Security, Västerås, Sweden, September 18, 2018 (pp. 494-506). Springer
Design Requirements for a Moral Machine for Autonomous Weapons
2018 (English). In: Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops / [ed] Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, Friedemann Bitsch, Springer, 2018, p. 494-506. Conference paper, Published paper (Refereed).
Abstract [en]

Autonomous Weapon Systems (AWS) are said to become the third revolution in warfare. These systems raise many questions and concerns that demand in-depth research on ethical and moral responsibility. Ethical decision-making is studied in related fields like Autonomous Vehicles and Human Operated drones, but not yet fully extended to the deployment of AWS, and research on moral judgement is lacking. In this paper, we propose design requirements for a Moral Machine (similar to http://moralmachine.mit.edu/) for Autonomous Weapons to conduct a large-scale study of the moral judgement of people regarding the deployment of this type of weapon. We ran an online survey to get a first impression of the importance of six variables that will be implemented in a proof-of-concept of a Moral Machine for Autonomous Weapons, and describe a scenario containing these six variables. The platform will enable large-scale randomized controlled experiments and generate knowledge about people’s feelings concerning this type of weapon. The next steps of our study include development and testing of the design before the prototype is upscaled to a Massive Online Experiment.
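
One way the randomized scenario generation such a platform requires could look is sketched below. The variable names and levels are placeholders; the paper's six variables are not reproduced here.

```python
# Hypothetical sketch: each respondent sees scenarios whose variables are
# sampled independently, enabling randomized controlled comparisons. The
# variables and levels below are placeholders, not the six from the paper.
import random
from typing import Dict, List

VARIABLES: Dict[str, List[str]] = {
    "variable_1": ["low", "high"],
    "variable_2": ["low", "high"],
    "variable_3": ["low", "high"],
}

def make_scenario(rng: random.Random) -> Dict[str, str]:
    """Sample one level per variable, independently and uniformly."""
    return {name: rng.choice(levels) for name, levels in VARIABLES.items()}

rng = random.Random(42)  # seeded so scenario assignment is reproducible
print(make_scenario(rng))
```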

Place, publisher, year, edition, pages
Springer, 2018
Keywords
Autonomous weapons, Ethical decision-making, Moral acceptability, Moral machine
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-157832 (URN); 978-3-319-99228-0 (ISBN); 978-3-319-99229-7 (ISBN)
Conference
SAFECOMP, International Conference on Computer Safety, Reliability, and Security, Västerås, Sweden, September 18, 2018
Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-09. Bibliographically approved.
Dignum, V., Baldoni, M., Baroglio, C., Caon, M., Chatila, R., Dennis, L., . . . Lopez-Sanchez, M. (2018). Ethics by Design: necessity or curse? In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. Paper presented at AAAI/ACM, Artificial intelligence, ethics, and society, Honolulu, Hawaii, USA, January 27-28, 2019 (pp. 60-66). New York: ACM Publications
Ethics by Design: necessity or curse?
2018 (English). In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: ACM Publications, 2018, p. 60-66. Conference paper, Published paper (Refereed).
Place, publisher, year, edition, pages
New York: ACM Publications, 2018
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-157828 (URN); 10.1145/3278721.3278745 (DOI); 978-1-4503-6012-8 (ISBN)
Conference
AAAI/ACM, Artificial intelligence, ethics, and society, Honolulu, Hawaii, USA, January 27-28, 2019
Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-11. Bibliographically approved.
Dignum, V. (2018). Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20(1), 1-3
Ethics in artificial intelligence: introduction to the special issue
2018 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 20, no 1, p. 1-3. Article in journal, Editorial material (Refereed). Published.
National Category
Engineering and Technology
Identifiers
urn:nbn:se:umu:diva-157836 (URN); 10.1007/s10676-018-9450-z (DOI); 000426716300001
Note

Special Issue

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-09. Bibliographically approved.
Richards, D., Dignum, V., Ryan, M. & Hitchens, M. (2018). Incremental Acquisition of Values to Deal with Cybersecurity Ethical Dilemmas. In: Kenichi Yoshida, Maria Lee (Ed.), Pacific Rim Knowledge Acquisition Workshop. Paper presented at 15th Pacific Rim Knowledge Acquisition Workshop, PKAW 2018 held in conjunction with the 15th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2018; Nanjing; China; August 28-29, 2018 (pp. 32-45). Springer
Incremental Acquisition of Values to Deal with Cybersecurity Ethical Dilemmas
2018 (English). In: Pacific Rim Knowledge Acquisition Workshop / [ed] Kenichi Yoshida, Maria Lee, Springer, 2018, p. 32-45. Conference paper, Published paper (Refereed).
Abstract [en]

Cybersecurity is a growing concern for organisations. Decision-making concerning responses to threats will involve making choices from a number of possible options. The choice made will often depend on the values held by the organisation and/or the individual/s making the decisions. To address the issue of how to capture ethical dilemmas and the values associated with the choices they raise, we propose the use of the Ripple Down Rules (RDR) incremental knowledge acquisition method. We provide an example of how the RDR knowledge can be acquired in the context of a value tree and then translated into a goal-plan tree that can be used by a BDI agent towards supporting the creation of ethical dilemmas that could be used for what-if analysis or training. We also discuss how the AORTA framework can be extended with values to allow the BDI cognitive agent to take into consideration organisational norms and policies in its decision-making.
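
A minimal sketch of the Ripple Down Rules mechanism referenced here follows, with invented cybersecurity rules rather than the paper's knowledge base or its value-tree translation.

```python
# Minimal single-classification Ripple Down Rules sketch (illustrative).
# Knowledge is acquired incrementally: when an expert disagrees with a
# conclusion, an exception rule is added in the context of the case that
# exposed the error, leaving earlier rules intact.
class RDRNode:
    def __init__(self, cond, conclusion):
        self.cond = cond              # predicate over a case (a dict)
        self.conclusion = conclusion  # value-laden choice, e.g. a response plan
        self.if_true = None           # exception branch (refines this rule)
        self.if_false = None          # alternative branch (tried if cond fails)

    def classify(self, case):
        if self.cond(case):
            refined = self.if_true.classify(case) if self.if_true else None
            return refined if refined is not None else self.conclusion
        return self.if_false.classify(case) if self.if_false else None

# Default rule reflecting a transparency value: always notify affected users.
root = RDRNode(lambda c: True, "notify_users")
# Expert adds an exception where security is judged to outweigh transparency:
root.if_true = RDRNode(lambda c: c.get("active_breach", False), "contain_silently")

print(root.classify({"active_breach": False}))  # notify_users
print(root.classify({"active_breach": True}))   # contain_silently
```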

Place, publisher, year, edition, pages
Springer, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743
Keywords
Agents, AORTA, Beliefs desires intentions, Cybersecurity, Ripple Down Rules, Values
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-157833 (URN); 10.1007/978-3-319-97289-3_3 (DOI); 9783319972886 (ISBN); 9783319972893 (ISBN)
Conference
15th Pacific Rim Knowledge Acquisition Workshop, PKAW 2018 held in conjunction with the 15th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2018; Nanjing; China; August 28-29, 2018
Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-09. Bibliographically approved.
Verdiesen, I., Dignum, V. & Van Den Hoven, J. (2018). Measuring Moral Acceptability in E-deliberation: A Practical Application of Ethics by Participation. ACM Transactions on Internet Technology, 18(4), 43:1-43:20
Measuring Moral Acceptability in E-deliberation: A Practical Application of Ethics by Participation
2018 (English). In: ACM Transactions on Internet Technology, ISSN 1533-5399, E-ISSN 1557-6051, Vol. 18, no 4, p. 43:1-43:20. Article in journal (Refereed). Published.
Abstract [en]

Current developments in governance and policy setting are challenging traditional top-down models of decision-making. Whereas, on the one hand, citizens are increasingly demanding and expected to participate directly in governance questions, social networking platforms are, on the other hand, increasingly providing podia for the spread of unfounded, extremist and/or harmful ideas. Participatory deliberation is a form of democratic policy making in which deliberation is central to decision-making, using both consensus decision-making and majority rule. However, by definition, it will lead to socially accepted results rather than ensuring the moral acceptability of the result. In fact, participation per se offers no guidance regarding the ethics of the decisions taken, nor does it provide means to evaluate alternatives in terms of their moral "quality." This article proposes an open participatory model, Massive Open Online Deliberation (MOOD), that can be used to solve some of the current policy authority deficits. MOOD taps into individual understanding and opinions by harnessing open, participatory, crowd-sourced, and wiki-like methodologies, effectively producing collective judgements regarding complex political and social issues in real time. MOOD offers the opportunity for people to develop and draft collective judgements on complex issues and crises in real time. MOOD is based on the concept of Ethics by Participation, a formalized and guided process of moral deliberation that extends deliberative democracy platforms to identify morally acceptable outcomes and enhance critical thinking and reflection among participants.
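
The interplay of majority rule and consensus mentioned above could be sketched as follows. This is a crude aggregation stub under assumed semantics, not MOOD's actual procedure.

```python
# Illustrative stub: accept an option only if it wins a strict majority AND
# no participant maintains a reasoned moral objection against it (a crude
# consensus proxy); otherwise the issue returns to deliberation.
from collections import Counter
from typing import List, Optional, Set

def decide(votes: List[str], objections: Set[str]) -> Optional[str]:
    winner, count = Counter(votes).most_common(1)[0]
    if count * 2 <= len(votes):
        return None   # no majority: deliberation continues
    if winner in objections:
        return None   # consensus blocked: arguments must be revisited
    return winner

print(decide(["A", "A", "B"], objections=set()))  # A
print(decide(["A", "A", "B"], objections={"A"}))  # None (objection blocks)
```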

Place, publisher, year, edition, pages
ACM, 2018
Keywords
Online deliberation, ethics by participation, participatory systems
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-157835 (URN); 10.1145/3183324 (DOI); 000457135300004
Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-09. Bibliographically approved.