umu.se Publications
1 - 16 of 16
  • 1.
    Aler Tubella, Andrea
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Dignum, Virginia
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    The Glass Box Approach: Verifying Contextual Adherence to Values. 2019. Conference paper (Refereed).
    Abstract [en]

    Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to be deployed safely, then people need to understand how the system is interpreting, and whether it is adhering to, the relevant moral values. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains.

    In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass Box’ around the system by mapping moral values into contextual verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value(s) in a specific context. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems – from deep neural networks to agent-based systems – whereas by making the context explicit we expose the different perspectives and frameworks that are taken into account when subsuming moral values into specific norms and functionalities. We present a modal logic formalisation of the Glass Box approach which is domain-agnostic, implementable, and expandable.
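
As an illustrative aside, the monitoring idea in this abstract can be sketched in a few lines of Python: values map to context-specific norms, each a predicate over the monitored system's inputs and outputs. All names below (GlassBox, the 'fairness' norm, the credit-scoring example) are invented for illustration and are not from the paper.

```python
# A minimal sketch of the Glass Box idea, assuming invented names throughout:
# a value is operationalised as context-specific norms, each a predicate over
# the monitored system's inputs and outputs.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Norm = Callable[[Dict[str, Any], Any], bool]  # (inputs, output) -> compliant?

@dataclass
class GlassBox:
    # value -> context -> norms that operationalise the value in that context
    norms: Dict[str, Dict[str, List[Norm]]] = field(default_factory=dict)

    def add_norm(self, value: str, context: str, norm: Norm) -> None:
        self.norms.setdefault(value, {}).setdefault(context, []).append(norm)

    def check(self, value: str, context: str,
              inputs: Dict[str, Any], output: Any) -> bool:
        # The system stays 'inside the box' for a value in a context iff
        # every registered norm for that (value, context) pair holds.
        return all(norm(inputs, output)
                   for norm in self.norms.get(value, {}).get(context, []))

# Hypothetical example: 'fairness' in a credit-scoring context, read as
# 'the decision must not use a protected attribute'.
box = GlassBox()
box.add_norm("fairness", "credit-scoring",
             lambda inp, out: "ethnicity" not in inp)

print(box.check("fairness", "credit-scoring",
                {"income": 42000, "age": 35}, "approve"))      # True
print(box.check("fairness", "credit-scoring",
                {"income": 42000, "ethnicity": "x"}, "deny"))  # False
```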

  • 2.
    Aler Tubella, Andrea
    Umeå University.
    Theodorou, Andreas
    Umeå University.
    Dignum, Frank
    Umeå University.
    Dignum, Virginia
    Umeå University.
    Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour. 2019. In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019. Conference paper (Other academic).
    Abstract [en]

    Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains.

    In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems.

    The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability; stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
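
The last point, checking compliance with different interpretations of the same value, can likewise be sketched: each interpretation is a separate set of named norm predicates evaluated over the same input/output trace. The 'privacy' readings below are invented examples, not the paper's.

```python
# Sketch of checking one I/O trace against two interpretations of the same
# value; all norm names and the 'privacy' readings are invented examples.
from typing import Any, Callable, Dict, List, Tuple

Trace = List[Tuple[Dict[str, Any], Any]]  # observed (inputs, output) pairs
Interpretation = Dict[str, Callable[[Dict[str, Any], Any], bool]]

def compliance(trace: Trace, interp: Interpretation) -> Dict[str, bool]:
    # For each named norm, report whether it holds over the whole trace.
    return {name: all(norm(i, o) for i, o in trace)
            for name, norm in interp.items()}

# Two interpretations of 'privacy' for a hypothetical recommender system:
strict = {"no_location_at_all": lambda i, o: "location" not in i}
lenient = {"coarse_location_only":
           lambda i, o: i.get("location", "none") in ("none", "city")}

trace = [({"user": 1, "location": "city"}, "recommend_a"),
         ({"user": 2}, "recommend_b")]

print(compliance(trace, strict))   # {'no_location_at_all': False}
print(compliance(trace, lenient))  # {'coarse_location_only': True}
```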

  • 3.
    Clodic, Aurélie
    Vázquez-Salceda, Javier
    Dignum, Frank
    Mascarenhas, Samuel
    Dignum, Virginia
    Augello, Agnese
    Gentile, Manuel
    Alami, Rachid
    On the Pertinence of Social Practices for Social Robotics. 2018. In: International Research Conference Robophilosophy 2018, Vienna, Austria, February 14-17, 2018, IOS Press, 2018, Vol. 311. Conference paper (Refereed).
    Abstract [en]

    In the area of consumer robots that need to have rich social interactions with humans, one of the challenges is the complexity of computing the appropriate interactions in a cognitive, social and physical context. We propose a novel approach for social robots based on the concept of Social Practices. By using social practices, robots are able to be aware of their own social identities (given by their role in the social practice) and the identities of others, and are also able to identify the different social contexts and the appropriate social interactions that go along with those contexts and identities.
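
A hypothetical sketch of what a social-practice representation might look like as a data structure: the context it applies in, the roles (social identities) it defines, and the interactions expected of each role. The greeting practice and all field names are invented, not the architecture proposed in the paper.

```python
# A hypothetical data sketch of a social practice; field names and the
# greeting practice are invented for illustration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SocialPractice:
    name: str
    context: str                    # situation in which the practice applies
    roles: List[str]                # social identities defined by the practice
    expected: Dict[str, List[str]]  # role -> typical interaction sequence

greeting = SocialPractice(
    name="greeting",
    context="visitor enters the room",
    roles=["host_robot", "visitor"],
    expected={
        "host_robot": ["turn_towards", "greet_verbally", "offer_assistance"],
        "visitor": ["respond_to_greeting"],
    },
)

def actions_for(practice: SocialPractice, role: str) -> List[str]:
    # A robot that knows its role can look up the appropriate interactions.
    return practice.expected.get(role, [])

print(actions_for(greeting, "host_robot"))
```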

  • 4.
    Dignum, Virginia
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Delft Design for Values Institute, Delft University of Technology, Delft, The Netherlands.
    Ethics in artificial intelligence: introduction to the special issue. 2018. In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 20, no 1, p. 1-3. Article in journal (Refereed).
  • 5.
    Dignum, Virginia
    Delft University of Technology, The Netherlands.
    Baldoni, Matteo
    Baroglio, Christina
    Caon, Maurizio
    Chatila, Raja
    Dennis, Louise
    Génova, Gonzalo
    Haim, Galit
    Kließ, Malte S
    Lopez-Sanchez, Maite
    Ethics by Design: necessity or curse? 2018. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: ACM Publications, 2018, p. 60-66. Conference paper (Refereed).
  • 6.
    Dignum, Virginia
    Delft University of Technology, The Netherlands.
    Dignum, Frank
    Vázquez-Salceda, Javier
    Clodic, Aurélie
    Gentile, Manuel
    Mascarenhas, Samuel
    Augello, Agnese
    Design for Values for Social Robot Architectures. 2018. In: Envisioning Robots in Society: Power, Politics, and Public Space / [ed] Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, Marco Norskov, IOS Press, 2018, Vol. 311. Conference paper (Refereed).
    Abstract [en]

    The integration of social robots in human societies requires that they are capable of taking decisions that may affect the lives of people around them. In order to ensure that these robots will behave according to shared ethical principles, an important shift in the design and development of social robots is needed, one where the main goal is improving ethical transparency rather than technical performance, and placing human values at the core of robot designs. In this abstract, we discuss the concept of ethical decision making and how to achieve trust according to the principles of Autonomy, Responsibility and Transparency (ART).

  • 7.
    Floridi, Luciano
    Cowls, Josh
    Beltrametti, Monica
    Chatila, Raja
    Chazerand, Patrice
    Dignum, Virginia
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Delft Design for Values Institute, Delft University of Technology, Delft, The Netherlands.
    Luetge, Christoph
    Madelin, Robert
    Pagallo, Ugo
    Rossi, Francesca
    Schafer, Burkhard
    Valcke, Peggy
    Vayena, Effy
    AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. 2018. In: Minds and Machines, ISSN 0924-6495, E-ISSN 1572-8641, Vol. 28, no 4, p. 689-707. Article in journal (Refereed).
    Abstract [en]

    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

  • 8.
    Larsen, John Bruntse
    Dignum, Virginia
    Villadsen, Jørgen
    Dignum, Frank
    Querying Social Practices in Hospital Context. 2018. In: 10th International Conference on Agents and Artificial Intelligence (Volume 2), 2018. Conference paper (Refereed).
    Abstract [en]

    Understanding the social contexts in which actions and interactions take place is of utmost importance for planning one’s goals and activities. People use social practices as a means to make sense of their environment, assessing how that context relates to past, common experiences, culture and capabilities. Social practices can therefore simplify deliberation and planning in complex contexts. In the context of patient-centered planning, hospitals seek means to ensure that patients and their families are at the center of decisions and planning of the healthcare processes. This requires, on the one hand, that patients are aware of the practices in place at the hospital and, on the other hand, that hospitals have the means to evaluate and adapt current practices to the needs of the patients. In this paper we apply a framework for formalizing social practices of an organization to an emergency department that carries out patient-centered planning. We indicate how such a formalization

  • 9.
    Mercuur, Rijk
    Dignum, Virginia
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands.
    Jonker, Catholijn M.
    The Value of Values and Norms in Social Simulation. 2019. In: JASSS: Journal of Artificial Societies and Social Simulation, ISSN 1460-7425, E-ISSN 1460-7425, Vol. 22, no 1, article id 9. Article in journal (Refereed).
    Abstract [en]

    Social simulations gain strength when agent behaviour can (1) represent human behaviour and (2) be explained in understandable terms. Agents with values and norms lead to simulation results that meet human needs for explanations, but have not been tested on their ability to reproduce human behaviour. This paper compares empirical data on human behaviour to simulated data on agents with values and norms in a psychological experiment on dividing money: the ultimatum game. We find that our agent model with values and norms produces aggregate behaviour that falls within the 95% confidence interval of human behaviour more often than the other tested agent models do. A main insight is that values serve as a static component in agent behaviour, whereas norms serve as a dynamic component.
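
The static-values/dynamic-norms insight can be illustrated with a toy proposer agent: its fairness value is fixed, while its norm (the believed 'normal' offer share) drifts towards observed behaviour. The parameters and update rule below are invented, not the paper's agent model.

```python
# Toy illustration of the paper's closing insight, with invented parameters:
# the fairness value is static, the norm adapts to observed offers.
import random

class Proposer:
    def __init__(self, fairness: float, norm_weight: float):
        self.fairness = fairness        # static value component, in [0, 1]
        self.norm = 0.5                 # dynamic norm: believed normal share
        self.norm_weight = norm_weight  # how strongly the norm sways offers

    def offer(self, pot: float) -> float:
        share = ((1 - self.norm_weight) * self.fairness
                 + self.norm_weight * self.norm)
        return pot * share

    def observe(self, offered_share: float, lr: float = 0.1) -> None:
        # Norms adapt to observed behaviour; the value itself never changes.
        self.norm += lr * (offered_share - self.norm)

random.seed(0)
agent = Proposer(fairness=0.4, norm_weight=0.5)
for _ in range(50):
    agent.observe(random.gauss(0.45, 0.05))  # offers seen in the population
print(round(agent.offer(100.0), 2))  # drifts towards the observed 45% norm
```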

  • 10.
    Mercuur, Rijk
    Larsen, John Bruntse
    Dignum, Virginia
    Delft University of Technology, The Netherlands.
    Modelling the Social Practices of an Emergency Room to Ensure Staff and Patient Wellbeing. 2018. In: 30th Benelux Conference on Artificial Intelligence: BNAIC 2018 Preproceedings, 2018, p. 133-147. Conference paper (Refereed).
    Abstract [en]

    Understanding the impact of activities is important for emergency rooms (ER) to ensure patient wellbeing and staff satisfaction. An ER is a complex social multi-agent system in which staff members should understand the needs of patients, what their colleagues expect of them, and how treatment usually proceeds. Decision support tools can contribute to this understanding, as they can better manage complex systems and give insight into possible problems using formal methods. Social practices aim to capture this social dimension by focussing on the shared routines in a system, such as diagnosing or treating the patient. This paper uses the Web Ontology Language (OWL) to formalize social practices and then applies this formalization to the ER domain. This results in an ontology that can be used as a basis for decision support tools based on formal reasoning, which we demonstrate by verifying a number of properties for our use case. These results also serve as an example for formalizing the social dimension of multi-agent systems in other domains.
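
A minimal sketch, using rdflib, of what an OWL fragment for social practices might look like, including one property check via SPARQL of the kind the abstract mentions. The class and property names are illustrative guesses; the paper's actual ontology will differ.

```python
# A minimal rdflib sketch of an OWL fragment for social practices in an ER,
# with one SPARQL property check. Names are illustrative guesses.
from rdflib import Graph, Namespace, OWL, RDF, RDFS

SP = Namespace("http://example.org/social-practice#")
g = Graph()
g.bind("sp", SP)

# Classes and object properties of the sketch ontology.
for cls in (SP.SocialPractice, SP.Role, SP.Activity):
    g.add((cls, RDF.type, OWL.Class))
for prop, rng in ((SP.hasRole, SP.Role), (SP.hasActivity, SP.Activity)):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, SP.SocialPractice))
    g.add((prop, RDFS.range, rng))

# A triage practice instance with one role and one activity.
g.add((SP.Triage, RDF.type, SP.SocialPractice))
g.add((SP.TriageNurse, RDF.type, SP.Role))
g.add((SP.AssessPatient, RDF.type, SP.Activity))
g.add((SP.Triage, SP.hasRole, SP.TriageNurse))
g.add((SP.Triage, SP.hasActivity, SP.AssessPatient))

# A property check of the kind the abstract mentions: every declared
# practice should define at least one role.
q = """SELECT ?p WHERE { ?p a sp:SocialPractice .
                         FILTER NOT EXISTS { ?p sp:hasRole ?r } }"""
print("practices without roles:", list(g.query(q, initNs={"sp": SP})))
```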

  • 11.
    Richards, Deborah
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Ryan, Malcolm
    Hitchens, Michael
    Incremental Acquisition of Values to Deal with Cybersecurity Ethical Dilemmas. 2018. In: Pacific Rim Knowledge Acquisition Workshop / [ed] Kenichi Yoshida, Maria Lee, Springer, 2018, p. 32-45. Conference paper (Refereed).
    Abstract [en]

    Cybersecurity is a growing concern for organisations. Decision-making concerning responses to threats will involve making choices from a number of possible options. The choice made will often depend on the values held by the organisation and/or the individuals making the decisions. To address the issue of how to capture ethical dilemmas and the values associated with the choices they raise, we propose the use of the Ripple Down Rules (RDR) incremental knowledge acquisition method. We provide an example of how the RDR knowledge can be acquired in the context of a value tree and then translated into a goal-plan tree that can be used by a BDI agent towards supporting the creation of ethical dilemmas that could be used for what-if analysis or training. We also discuss how the AORTA framework can be extended with values to allow the BDI cognitive agent to take organisational norms and policies into consideration in its decision-making.
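
For readers unfamiliar with RDR, here is a compact single-classification Ripple Down Rules sketch: new rules are attached at the exact point in the tree where a case was misclassified, so earlier behaviour is preserved. The cybersecurity cases and conclusions are invented examples, not taken from the paper.

```python
# A compact single-classification Ripple Down Rules sketch with invented
# cybersecurity cases.
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional

Case = Dict[str, Any]

@dataclass
class Node:
    cond: Callable[[Case], bool]
    conclusion: str
    cornerstone: Optional[Case] = None  # the case that justified this rule
    yes: Optional["Node"] = None        # exception branch (rule fired)
    no: Optional["Node"] = None         # alternative branch (rule didn't fire)

def infer(root: Optional[Node], case: Case, default: str = "no_action") -> str:
    result, n = default, root
    while n is not None:
        if n.cond(case):
            result, n = n.conclusion, n.yes
        else:
            n = n.no
    return result

def add_rule(root: Node, case: Case,
             cond: Callable[[Case], bool], conclusion: str) -> None:
    # Replay the case and attach the new rule at the link where traversal
    # ended: yes-link if the last node fired (exception), else the no-link.
    new = Node(cond, conclusion, cornerstone=dict(case))
    parent, fired = root, root.cond(case)
    n = parent.yes if fired else parent.no
    while n is not None:
        parent, fired = n, n.cond(case)
        n = parent.yes if fired else parent.no
    if fired:
        parent.yes = new
    else:
        parent.no = new

root = Node(lambda c: c.get("threat") == "phishing", "block_sender",
            cornerstone={"threat": "phishing", "sender": "external"})
# The expert corrects a misclassified case: internal phishing is escalated.
add_rule(root, {"threat": "phishing", "sender": "internal"},
         lambda c: c.get("sender") == "internal", "escalate_to_soc")
print(infer(root, {"threat": "phishing", "sender": "internal"}))  # escalate_to_soc
print(infer(root, {"threat": "phishing", "sender": "external"}))  # block_sender
print(infer(root, {"threat": "malware"}))                         # no_action
```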

  • 12.
    Shults, F LeRon
    Wildman, Wesley J
    Dignum, Virginia
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    The Ethics of Computer Modeling and Simulation. 2018. In: 2018 Winter Simulation Conference (WSC), 2018, p. 4069-4083. Conference paper (Refereed).
    Abstract [en]

    This paper describes a framework for ethical analysis of the practice of computer Modeling & Simulation (M&S). Each of the authors presents a computational model as a case study and offers an ethical analysis by applying the philosophical, scientific, and practical components of the framework. Each author also provides a constructive response to the other case studies. The paper concludes with a summary of guidelines for using this ethical framework when preparing, executing, and analyzing M&S activities. Our hope is that this collaborative engagement will encourage others to join a rich and ongoing conversation about the ethics of M&S.

  • 13.
    Thelisson, Eva
    Sharma, Kshitij
    Salam, Hanan
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    The General Data Protection Regulation: An Opportunity for the HCI Community? 2018. In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, 2018, p. 1-8. Conference paper (Refereed).
    Abstract [en]

    In HCI, researchers conduct studies in interdisciplinary projects involving massive volumes of data, artificial intelligence, and machine learning capabilities. Awareness of this responsibility is emerging as a key concern for the HCI community.

    This community will be impacted by the General Data Protection Regulation (GDPR) [5], which will enter into force on 25 May 2018. From that date, each data controller and data processor will face an increase in their legal obligations (in particular their accountability) under certain conditions.

    The GDPR encourages the adoption of Soft Law mechanisms, approved by the national competent authority on data protection, to demonstrate compliance with the Regulation. Approved Guidelines, Codes of Conduct, Labeling, Marks and Seals dedicated to data protection, as well as certification mechanisms, are some of the options proposed by the GDPR.

    There may be discrepancies between the realities of HCI fieldwork and the formal process of obtaining Soft Law approval from the Competent Authorities dedicated to data protection. Given these issues, it is important for researchers to reflect as a community on legal and ethical encounters in HCI research.

    This workshop will provide a forum for researchers to share experiences about Soft Law mechanisms they have put in place to increase Trust, Transparency and Accountability among stakeholders. These discussions will be used to develop a white paper on practical Soft Law mechanisms (certification, labeling, marks, seals...) emerging in HCI research, with the aim of demonstrating that the GDPR may be an opportunity for the HCI community.

  • 14.
    Verdiesen, Ilse
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Rahwan, Iyad
    Design Requirements for a Moral Machine for Autonomous Weapons. 2018. In: Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops / [ed] Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, Friedemann Bitsch, Springer, 2018, p. 494-506. Conference paper (Refereed).
    Abstract [en]

    Autonomous Weapon Systems (AWS) are said to become the third revolution in warfare. These systems raise many questions and concerns that demand in-depth research on ethical and moral responsibility. Ethical decision-making is studied in related fields like Autonomous Vehicles and Human Operated drones, but has not yet been fully extended to the deployment of AWS, and research on moral judgement is lacking. In this paper, we propose design requirements for a Moral Machine (similar to http://moralmachine.mit.edu/) for Autonomous Weapons to conduct a large-scale study of the moral judgement of people regarding the deployment of this type of weapon. We ran an online survey to get a first impression of the importance of six variables that will be implemented in a proof-of-concept of a Moral Machine for Autonomous Weapons and describe a scenario containing these six variables. The platform will enable large-scale randomized controlled experiments and generate knowledge about people’s feelings concerning this type of weapon. The next steps of our study include development and testing of the design before the prototype is upscaled to a Massive Online Experiment.

  • 15.
    Verdiesen, Ilse
    Dignum, Virginia
    Delft University of Technology, Jaffalaan, Delft.
    Van Den Hoven, Jeroen
    Measuring Moral Acceptability in E-deliberation: A Practical Application of Ethics by Participation. 2018. In: ACM Transactions on Internet Technology, ISSN 1533-5399, E-ISSN 1557-6051, Vol. 18, no 4, p. 43:1-43:20. Article in journal (Refereed).
    Abstract [en]

    Current developments in governance and policy setting are challenging traditional top-down models of decision-making. Whereas, on the one hand, citizens increasingly demand and are expected to participate directly in governance questions, social networking platforms are, on the other hand, increasingly providing podia for the spread of unfounded, extremist and/or harmful ideas. Participatory deliberation is a form of democratic policy making in which deliberation is central to decision-making, using both consensus decision-making and majority rule. However, by definition, it will lead to socially accepted results rather than ensuring the moral acceptability of the result. In fact, participation per se offers no guidance regarding the ethics of the decisions taken, nor does it provide means to evaluate alternatives in terms of their moral "quality." This article proposes an open participatory model, Massive Open Online Deliberation (MOOD), that can be used to solve some of the current policy authority deficits. MOOD taps into individual understanding and opinions by harnessing open, participatory, crowd-sourced, and wiki-like methodologies, effectively producing collective judgements regarding complex political and social issues in real time. MOOD offers the opportunity for people to develop and draft collective judgements on complex issues and crises in real time. MOOD is based on the concept of Ethics by Participation, a formalized and guided process of moral deliberation that extends deliberative democracy platforms to identify morally acceptable outcomes and enhance critical thinking and reflection among participants.

  • 16.
    Winikoff, Michael
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Dignum, Frank
    Why Bad Coffee?: Explaining Agent Plans with Valuings. 2018. In: Computer Safety, Reliability, and Security / [ed] Gallina B., Skavhaug A., Schoitsch E., Bitsch F., Springer International Publishing, 2018, Vol. 11094, p. 521-534. Conference paper (Refereed).
    Abstract [en]

    An important issue in deploying an autonomous system is how to enable human users and stakeholders to develop an appropriate level of trust in the system. It has been argued that a crucial mechanism to enable appropriate trust is the ability of a system to explain its behaviour. Obviously, such explanations need to be comprehensible to humans. We argue that it makes sense to build on the results of extensive research in social sciences that explores how humans explain their behaviour. Using similar concepts for explanation is argued to help with comprehensibility, since the concepts are familiar. Following work in the social sciences, we propose the use of a folk-psychological model that utilises beliefs, desires, and “valuings”. We propose a formal framework for constructing explanations of the behaviour of an autonomous system, present an (implemented) algorithm for giving explanations, and present evaluation results.
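
The explanation style described here, reading explanations off beliefs, desires, and valuings attached to plan steps, can be sketched as follows. The coffee scenario and all annotations are invented illustrations, not the paper's formal framework or algorithm.

```python
# A sketch of folk-psychological explanations: plan steps carry the belief,
# desire, or valuing that licensed them, and an explanation is read off
# those annotations. Scenario and annotations are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str
    belief: str = ""   # what the agent took to be true
    desire: str = ""   # the goal the action served
    valuing: str = ""  # the preference that tipped the choice

def explain(plan: List[Step], action: str) -> str:
    for s in plan:
        if s.action == action:
            reasons = [r for r in (
                f"it believed {s.belief}" if s.belief else "",
                f"it wanted {s.desire}" if s.desire else "",
                f"it values {s.valuing}" if s.valuing else "",
            ) if r]
            return f"The agent did '{action}' because " + " and ".join(reasons) + "."
    return f"No step '{action}' in the plan."

plan = [
    Step("go_to_kitchen", belief="the nearest machine is in the kitchen",
         desire="to fetch coffee"),
    Step("use_instant_coffee", belief="the good machine is broken",
         desire="to fetch coffee",
         valuing="being on time over coffee quality"),
]
# Why the 'bad coffee'? The valuing of punctuality over quality explains it.
print(explain(plan, "use_instant_coffee"))
```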
