1 - 19 of 19
  • 1.
    Aler Tubella, Andrea
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    The Glass Box Approach: Verifying Contextual Adherence to Values (2019). Conference paper (Refereed)
    Abstract [en]

    Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains, such as criminal justice and consumer finance, which directly affect human well-being. However, if AI is to be deployed safely, then people need to understand what the system is doing and whether it is adhering to the relevant moral values. Even though transparency is often seen as the requirement in this case, realistically it might not always be possible or desirable, whereas the need to ensure that the system operates within set moral bounds remains.

    In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass Box’ around the system by mapping moral values into contextual verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value(s) in a specific context. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems, from deep neural networks to agent-based systems, whereas by making the context explicit we expose the different perspectives and frameworks that are taken into account when subsuming moral values into specific norms and functionalities. We present a modal logic formalisation of the Glass Box approach which is domain-agnostic, implementable, and expandable.
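To make the input/output monitoring idea concrete, here is a minimal, hypothetical Python sketch of a ‘Glass Box’ wrapper. The paper gives a modal logic formalisation rather than code, and the norms, the stand-in model, and the consumer-finance context below are invented for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Norm:
    """A contextual, verifiable norm: a predicate over an input or output."""
    description: str
    holds: Callable[[Any], bool]

class GlassBox:
    """Wraps an opaque system and checks its I/O against norms for one context."""
    def __init__(self, system, input_norms, output_norms, context):
        self.system = system
        self.input_norms = input_norms
        self.output_norms = output_norms
        self.context = context

    def __call__(self, x):
        violations = [n.description for n in self.input_norms if not n.holds(x)]
        y = self.system(x)
        violations += [n.description for n in self.output_norms if not n.holds(y)]
        if violations:
            # Staying "inside the box" is what guarantees value adherence,
            # so any violation is surfaced rather than silently ignored.
            raise ValueError(f"[{self.context}] norm violations: {violations}")
        return y

# Hypothetical usage: the value "fairness" subsumed into concrete norms
# for a consumer-finance context.
no_protected_attrs = Norm("input omits protected attributes",
                          lambda x: "ethnicity" not in x)
decision_is_binary = Norm("output is an approve/deny decision",
                          lambda y: y in ("approve", "deny"))

boxed = GlassBox(lambda x: "approve",          # stand-in for any opaque model
                 [no_protected_attrs], [decision_is_binary],
                 context="consumer finance")
print(boxed({"income": 40_000}))               # -> "approve"
```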

  • 2.
    Aler Tubella, Andrea
    et al.
    Umeå universitet.
    Theodorou, Andreas
    Umeå universitet.
    Dignum, Frank
    Umeå universitet.
    Dignum, Virginia
    Umeå universitet.
    Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour (2019). In: Proceedings of the 28th International Joint Conference on Artificial Intelligence, 2019. Conference paper (Other academic)
    Abstract [en]

    Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains.

    In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems; from deep neural networks to agent-based systems.

    The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability; stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values, and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
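Complementing the runtime wrapper sketched under item 1, the higher-level check mentioned in the last sentence can be pictured as testing logged inputs and outputs against two different interpretations of the same value. Everything below (the ‘privacy’ norms, the log records) is hypothetical, not taken from the paper:

```python
# Two interpretations of the same abstract value ("privacy"), each mapped
# to its own set of verifiable norms over logged input/output records.
strict = [lambda rec: "location" not in rec["input"]]
lenient = [lambda rec: rec["output"].get("location_shared", False) is False]

log = [
    {"input": {"query": "weather"}, "output": {"location_shared": False}},
    {"input": {"query": "nearby cafes", "location": "Umeå"},
     "output": {"location_shared": False}},
]

for name, norms in (("strict privacy", strict), ("lenient privacy", lenient)):
    # The system complies with an interpretation if every logged record
    # satisfies every norm derived from that interpretation.
    ok = all(norm(rec) for rec in log for norm in norms)
    print(f"{name}: {'compliant' if ok else 'violated'}")
```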

  • 3. Clodic, Aurélie
    et al.
    Vázquez-Salceda, Javier
    Dignum, Frank
    Mascarenhas, Samuel
    Dignum, Virginia
    Augello, Agnese
    Gentile, Manuel
    Alami, Rachid
    On the Pertinence of Social Practices for Social Robotics (2018). In: International Research Conference Robophilosophy 2018, Vienna, Austria, February 14-17, 2018, IOS Press, 2018, Vol. 311. Conference paper (Refereed)
    Abstract [en]

    In the area of consumer robots that need to have rich social interactions with humans, one of the challenges is the complexity of computing the appropriate interactions in a cognitive, social and physical context. We propose a novel approach for social robots based on the concept of Social Practices. By using social practices, robots are able to be aware of their own social identities (given by their role in the social practice) and the identities of others, and to identify the different social contexts and the appropriate social interactions that go along with those contexts and identities.
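One way to picture a social practice as a data structure, purely as a hypothetical illustration (the paper does not prescribe this encoding), is a record of the roles it defines, the context that triggers it, and role-indexed expected actions:

```python
from dataclasses import dataclass, field

@dataclass
class SocialPractice:
    """A minimal, invented encoding of a social practice."""
    name: str
    roles: list[str]
    context: str
    expected_actions: dict[str, list[str]] = field(default_factory=dict)

greeting = SocialPractice(
    name="greeting",
    roles=["host", "guest"],
    context="guest enters the home",
    expected_actions={"host": ["welcome", "offer seat"],
                      "guest": ["greet", "accept or decline"]},
)

# A robot enacting the "host" role can look up which interactions are
# appropriate in the current context and which role the other party plays.
print(greeting.expected_actions["host"])
```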

  • 4.
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Delft Design for Values Institute, Delft University of Technology, Delft, The Netherlands.
    Ethics in artificial intelligence: introduction to the special issue (2018). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 20, no. 1, pp. 1-3. Journal article (Refereed)
  • 5.
    Dignum, Virginia
    et al.
    Delft University of Technology, The Netherlands.
    Baldoni, Matteo
    Baroglio, Christina
    Caon, Maurizio
    Chatila, Raja
    Dennis, Louise
    Génova, Gonzalo
    Haim, Galit
    Kließ, Malte S
    Lopez-Sanchez, Maite
    Ethics by Design: necessity or curse? (2018). In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York: ACM Publications, 2018, pp. 60-66. Conference paper (Refereed)
  • 6.
    Dignum, Virginia
    et al.
    Delft University of Technology, Netherlands.
    Dignum, Frank
    Vázquez-Salceda, Javier
    Clodic, Aurélie
    Gentile, Manuel
    Mascarenhas, Samuel
    Augello, Agnese
    Design for Values for Social Robot Architectures (2018). In: Envisioning Robots in Society: Power, Politics, and Public Space / [ed] Mark Coeckelbergh, Janina Loh, Michael Funk, Johanna Seibt, Marco Norskov, IOS Press, 2018, Vol. 311. Conference paper (Refereed)
    Abstract [en]

    The integration of social robots in human societies requires that they are capable of taking decisions that may affect the lives of people around them. In order to ensure that these robots will behave according to shared ethical principles, an important shift in the design and development of social robots is needed: one where the main goal is improving ethical transparency rather than technical performance, and where human values are placed at the core of robot designs. In this abstract, we discuss the concept of ethical decision making and how to achieve trust according to the principles of Autonomy, Responsibility and Transparency (ART).

  • 7. Floridi, Luciano
    et al.
    Cowls, Josh
    Beltrametti, Monica
    Chatila, Raja
    Chazerand, Patrice
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Delft Design for Values Institute, Delft University of Technology, Delft, The Netherlands.
    Luetge, Christoph
    Madelin, Robert
    Pagallo, Ugo
    Rossi, Francesca
    Schafer, Burkhard
    Valcke, Peggy
    Vayena, Effy
    AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations (2018). In: Minds and Machines, ISSN 0924-6495, E-ISSN 1572-8641, Vol. 28, no. 4, pp. 689-707. Journal article (Refereed)
    Abstract [en]

    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a "Good AI Society". We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

  • 8. Larsen, John Bruntse
    et al.
    Dignum, Virginia
    Villadsen, Jørgen
    Dignum, Frank
    Querying Social Practices in Hospital Context (2018). In: 10th International Conference on Agents and Artificial Intelligence (Volume 2), 2018. Conference paper (Refereed)
    Abstract [en]

    Understanding the social contexts in which actions and interactions take place is of utmost importance for planning one’s goals and activities. People use social practices as means to make sense of their environment, assessing how that context relates to past, common experiences, culture and capabilities. Social practices can therefore simplify deliberation and planning in complex contexts. In the context of patient-centered planning, hospitals seek means to ensure that patients and their families are at the center of decisions and planning of the healthcare processes. This requires on the one hand that patients are aware of the practices in place at the hospital and on the other hand that hospitals have the means to evaluate and adapt current practices to the needs of the patients. In this paper we apply a framework for formalizing social practices of an organization to an emergency department that carries out patient-centered planning. We indicate how such a formalization

  • 9. Mercuur, Rijk
    et al.
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Faculty of Technology, Policy and Management, Delft University of Technology, Jaffalaan 5, 2628 BX Delft, The Netherlands.
    Jonker, Catholijn M.
    The Value of Values and Norms in Social Simulation (2019). In: JASSS: Journal of Artificial Societies and Social Simulation, ISSN 1460-7425, E-ISSN 1460-7425, Vol. 22, no. 1, article id 9. Journal article (Refereed)
    Abstract [en]

    Social simulations gain strength when agent behaviour can (1) represent human behaviour and (2) be explained in understandable terms. Agents with values and norms lead to simulation results that meet human needs for explanations, but have not been tested on their ability to reproduce human behaviour. This paper compares empirical data on human behaviour to simulated data on agents with values and norms in a psychological experiment on dividing money: the ultimatum game. We find that our agent model with values and norms produces aggregate behaviour that falls within the 95% confidence interval of observed human behaviour more often than the other tested agent models. A main insight is that values serve as a static component in agent behaviour, whereas norms serve as a dynamic component.
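The static/dynamic split in that last sentence can be illustrated with a toy ultimatum-game simulation. This is a hypothetical sketch, not the paper’s model; the blending weights and the norm-update rule are invented:

```python
import random

class Agent:
    """Value: a static fairness weight. Norm: a dynamic estimate of the
    socially expected offer, updated from observed offers."""
    def __init__(self, fairness, norm=0.5):
        self.fairness = fairness   # static value component
        self.norm = norm           # dynamic norm component

    def offer(self):
        # Blend what the agent personally values with what the norm expects.
        return 0.5 * self.fairness + 0.5 * self.norm

    def accept(self, offer):
        # Reject offers far below the currently perceived norm.
        return offer >= 0.6 * self.norm

    def observe(self, offer):
        # Norms drift toward observed behaviour; values stay fixed.
        self.norm += 0.1 * (offer - self.norm)

random.seed(1)
agents = [Agent(fairness=random.uniform(0.3, 0.7)) for _ in range(20)]
for _ in range(100):
    proposer, responder = random.sample(agents, 2)
    offer = proposer.offer()
    responder.accept(offer)          # accept/reject the proposed split
    for a in agents:
        a.observe(offer)             # everyone updates the shared norm

print(f"mean offer after 100 rounds: "
      f"{sum(a.offer() for a in agents) / len(agents):.2f}")
```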

  • 10. Mercuur, Rijk
    et al.
    Larsen, John Bruntse
    Dignum, Virginia
    Delft University of Technology, The Netherlands.
    Modelling the Social Practices of an Emergency Room to Ensure Staff and Patient Wellbeing (2018). In: 30th Benelux Conference on Artificial Intelligence: BNAIC 2018 Preproceedings, 2018, pp. 133-147. Conference paper (Refereed)
    Abstract [en]

    Understanding the impact of activities is important for emergency rooms (ER) to ensure patient wellbeing and staff satisfaction. An ER is a complex social multi-agent system where staff members should understand the needs of patients, what their colleagues expect of them and how treatment usually proceeds. Decision support tools can contribute to this understanding, as they can better manage complex systems and give insight into possible problems using formal methods. Social practices aim to capture this social dimension by focussing on the shared routines in a system, such as diagnosing or treating the patient. This paper uses the Web Ontology Language (OWL) to formalize social practices and then applies this formalization to the ER domain. This results in an ontology that can be used as a basis for decision support tools based on formal reasoning, which we demonstrate by verifying a number of properties for our use case. These results also serve as an example for formalizing the social dimension of multi-agent systems in other domains.
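A flavour of what such property verification can look like, sketched with rdflib and a SPARQL check rather than a full OWL reasoner; the `er#` namespace, classes, and properties are hypothetical stand-ins for the paper’s ontology:

```python
from rdflib import Graph, Namespace, RDF

ER = Namespace("http://example.org/er#")   # hypothetical namespace
g = Graph()
g.bind("er", ER)

# A tiny fragment of a social-practice ontology for an emergency room:
# the triage practice involves a nurse role and an expected activity.
g.add((ER.Triage, RDF.type, ER.SocialPractice))
g.add((ER.Triage, ER.hasRole, ER.TriageNurse))
g.add((ER.Triage, ER.hasActivity, ER.AssessUrgency))
g.add((ER.AssessUrgency, ER.performedBy, ER.TriageNurse))

# Verify a simple property: every activity in a practice is performed by
# a role that the practice itself defines.
q = """
SELECT ?practice ?activity WHERE {
  ?practice a er:SocialPractice ;
            er:hasActivity ?activity .
  ?activity er:performedBy ?role .
  FILTER NOT EXISTS { ?practice er:hasRole ?role }
}
"""
violations = list(g.query(q, initNs={"er": ER}))
print("property holds" if not violations else f"violations: {violations}")
```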

  • 11. Richards, Deborah
    et al.
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. TU Delft, the Netherlands.
    Supporting and challenging learners through pedagogical agents: Addressing ethical issues through designing for values (2019). In: British Journal of Educational Technology, ISSN 0007-1013, E-ISSN 1467-8535, Vol. 50, no. 6, pp. 2885-2901. Journal article (Refereed)
    Abstract [en]

    Pedagogical Agents (PAs) that would guide interactions in intelligent learning environments were envisioned two decades ago. These early animated characters were shown to deliver learning benefits. However, little was understood regarding what aspects were beneficial for learning and what sort of learning PAs were suitable for. This article considers the current and future use of PAs to support and challenge learners from three perspectives. Firstly, we look at PAs from a practical perspective to consider what Intelligent Virtual Agents are, the roles they play in education and beyond, and the underlying technologies and theories driving them. Next, we take a pedagogical perspective to consider the vision, the pedagogical approaches supported and new possible uses of PAs. This leads us to the political perspective, which considers the values, ethics and societal impacts of PAs. Drawing all three perspectives together, we present a design for values approach to designing ethical and socially responsible PAs.

  • 12. Richards, Deborah
    et al.
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Ryan, Malcolm
    Hitchens, Michael
    Incremental Acquisition of Values to Deal with Cybersecurity Ethical Dilemmas (2018). In: Pacific Rim Knowledge Acquisition Workshop / [ed] Kenichi Yoshida, Maria Lee, Springer, 2018, pp. 32-45. Conference paper (Refereed)
    Abstract [en]

    Cybersecurity is a growing concern for organisations. Decision-making concerning responses to threats will involve making choices from a number of possible options. The choice made will often depend on the values held by the organisation and/or the individual/s making the decisions. To address the issue of how to capture ethical dilemmas and the values associated with the choices they raise, we propose the use of the Ripple Down Rules (RDR) incremental knowledge acquisition method. We provide an example of how the RDR knowledge can be acquired in the context of a value tree and then translated into a goal-plan tree that can be used by a BDI agent towards supporting the creation of ethical dilemmas that could be used for what-if analysis or training. We also discuss how the AORTA framework can be extended with values to allow the BDI cognitive agent to take organisational norms and policies into consideration in its decision-making.
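Ripple Down Rules incrementally refine a knowledge base by attaching exception rules to the rule that produced a wrong conclusion. Below is a minimal, hypothetical single-classification RDR sketch with an invented cybersecurity dilemma; it is not the authors’ system:

```python
class RDRNode:
    """A single-classification Ripple Down Rules node: if the condition
    fires, try the exception branch for a refinement; otherwise fall
    through to the alternative ("else") branch."""
    def __init__(self, cond, conclusion, except_=None, else_=None):
        self.cond, self.conclusion = cond, conclusion
        self.except_, self.else_ = except_, else_

    def classify(self, case, default=None):
        if self.cond(case):
            refined = self.except_.classify(case) if self.except_ else None
            return refined or self.conclusion
        return self.else_.classify(case, default) if self.else_ else default

# Hypothetical dilemma: what to do with a compromised server?  The base
# rule values availability; an incrementally acquired exception values
# confidentiality more when customer data is being exfiltrated.
rules = RDRNode(
    cond=lambda c: c["compromised"],
    conclusion="isolate but keep running",       # values availability
    except_=RDRNode(
        cond=lambda c: c.get("data_exfiltration"),
        conclusion="shut down immediately",      # values confidentiality
    ),
)

print(rules.classify({"compromised": True}))
print(rules.classify({"compromised": True, "data_exfiltration": True}))
```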

  • 13. Shults, F LeRon
    et al.
    Wildman, Wesley J
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Dignum, Virginia
    The Ethics of Computer Modeling and Simulation (2018). In: 2018 Winter Simulation Conference (WSC), 2018, pp. 4069-4083. Conference paper (Refereed)
    Abstract [en]

    This paper describes a framework for ethical analysis of the practice of computer Modeling & Simulation (M&S). Each of the authors presents a computational model as a case study and offers an ethical analysis by applying the philosophical, scientific, and practical components of the framework. Each author also provides a constructive response to the other case studies. The paper concludes with a summary of guidelines for using this ethical framework when preparing, executing, and analyzing M&S activities. Our hope is that this collaborative engagement will encourage others to join a rich and ongoing conversation about the ethics of M&S.

  • 14. Thelisson, Eva
    et al.
    Sharma, Kshitij
    Salam, Hanan
    Dignum, Virginia
    Delft University of Technology, Delft, Netherlands.
    The General Data Protection Regulation: An Opportunity for the HCI Community? (2018). In: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, ACM, 2018, pp. 1-8. Conference paper (Refereed)
    Abstract [en]

    In HCI, researchers conduct studies in interdisciplinary projects involving massive volumes of data, artificial intelligence and machine learning capabilities. Awareness of the responsibility this entails is emerging as a key concern for the HCI community.

    This community will be impacted by the General Data Protection Regulation (GDPR) [5], which will enter into force on the 25th of May 2018. From that date, each data controller and data processor will face an increase in its legal obligations (in particular its accountability) under certain conditions.

    The GDPR encourages the adoption of Soft Law mechanisms, approved by the national competent authority on data protection, to demonstrate compliance with the Regulation. Approved Guidelines, Codes of Conduct, Labeling, Marks and Seals dedicated to data protection, as well as certification mechanisms, are some of the options proposed by the GDPR.

    There may be discrepancies between the realities of HCI fieldwork and the formal process of obtaining Soft Law approval by Competent Authorities dedicated to data protection. Given these issues, it is important for researchers to reflect on legal and ethical encounters in HCI research as a community.

    This workshop will provide a forum for researchers to share experiences about Soft Law mechanisms they have put in place to increase Trust, Transparency and Accountability among stakeholders. These discussions will be used to develop a white paper on practical Soft Law mechanisms (certification, labeling, marks, seals...) emerging in HCI research, with the aim of demonstrating that the GDPR may be an opportunity for the HCI community.

  • 15.
    Theodorou, Andreas
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Towards ethical and socio-legal governance in AI (2020). In: Nature Machine Intelligence, ISSN 2522-5839. Journal article (Refereed)
  • 16. Verdiesen, Ilse
    et al.
    de Sio, Filippo Santoni
    Dignum, Virginia
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Delft University of Technology, Delft, The Netherlands.
    Moral Values Related to Autonomous Weapon Systems: An Empirical Survey that Reveals Common Ground for the Ethical Debate (2019). In: IEEE Technology & Society Magazine, ISSN 0278-0097, E-ISSN 1937-416X, Vol. 38, no. 4, pp. 34-44. Journal article (Refereed)
    Abstract [en]

    In the political debate on Autonomous Weapon Systems, strong views and opinions are voiced, but empirical research to support these opinions is lacking. Insight into which moral values are related to the deployment of Autonomous Weapon Systems is missing. We describe the empirical results of two studies on moral values regarding Autonomous Weapon Systems that aim to understand how people perceive the introduction of Autonomous Weapon Systems. One study consists of a sample of military personnel of the Dutch Ministry of Defense and the second study contains a sample of civilians. The results indicate that both groups are more anxious about the deployment of Autonomous Weapon Systems than about the deployment of Human Operated drones, and that they perceive Autonomous Weapon Systems to have less respect for the dignity of human life. The concern that Autonomous Weapon Systems create new kinds of psychological and moral harm is very present in the public debate, and in our opinion this is one element that deserves to be carefully considered in future debates on the ethics of the design and deployment of Autonomous Weapon Systems. The results of these studies reveal a common ground regarding the moral values of human dignity and anxiety pertaining to the introduction of Autonomous Weapon Systems, which could further the ethical debate.

  • 17. Verdiesen, Ilse
    et al.
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Rahwan, Iyad
    Design Requirements for a Moral Machine for Autonomous Weapons (2018). In: Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops / [ed] Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, Friedemann Bitsch, Springer, 2018, pp. 494-506. Conference paper (Refereed)
    Abstract [en]

    Autonomous Weapon Systems (AWS) are said to become the third revolution in warfare. These systems raise many questions and concerns that demand in-depth research on ethical and moral responsibility. Ethical decision-making is studied in related fields like Autonomous Vehicles and Human Operated drones, but has not yet been fully extended to the deployment of AWS, and research on moral judgement is lacking. In this paper, we propose design requirements for a Moral Machine (similar to http://moralmachine.mit.edu/) for Autonomous Weapons to conduct a large-scale study of the moral judgement of people regarding the deployment of this type of weapon. We ran an online survey to get a first impression of the importance of six variables that will be implemented in a proof-of-concept of a Moral Machine for Autonomous Weapons, and we describe a scenario containing these six variables. The platform will enable large-scale randomized controlled experiments and generate knowledge about people’s feelings concerning this type of weapon. The next steps of our study include development and testing of the design before the prototype is upscaled to a Massive Online Experiment.

  • 18. Verdiesen, Ilse
    et al.
    Dignum, Virginia
    Delft University of Technology, Jaffalaan, Delft.
    Van Den Hoven, Jeroen
    Measuring Moral Acceptability in E-deliberation: A Practical Application of Ethics by Participation (2018). In: ACM Transactions on Internet Technology, ISSN 1533-5399, E-ISSN 1557-6051, Vol. 18, no. 4, pp. 43:1-43:20. Journal article (Refereed)
    Abstract [en]

    Current developments in governance and policy setting are challenging traditional top-down models of decision-making. Whereas, on the one hand, citizens increasingly demand and are expected to participate directly in governance questions, social networking platforms are, on the other hand, increasingly providing podia for the spread of unfounded, extremist and/or harmful ideas. Participatory deliberation is a form of democratic policy making in which deliberation is central to decision-making, using both consensus decision-making and majority rule. However, by definition, it will lead to socially accepted results rather than ensuring the moral acceptability of the result. In fact, participation per se offers no guidance regarding the ethics of the decisions taken, nor does it provide means to evaluate alternatives in terms of their moral "quality." This article proposes an open participatory model, Massive Open Online Deliberation (MOOD), that can be used to solve some of the current policy authority deficits. MOOD taps into individual understanding and opinions by harnessing open, participatory, crowd-sourced, and wiki-like methodologies, effectively producing collective judgements regarding complex political and social issues in real time. MOOD offers the opportunity for people to develop and draft collective judgements on complex issues and crises in real time. MOOD is based on the concept of Ethics by Participation, a formalized and guided process of moral deliberation that extends deliberative democracy platforms to identify morally acceptable outcomes and enhance critical thinking and reflection among participants.

  • 19. Winikoff, Michael
    et al.
    Dignum, Virginia
    Delft University of Technology, Delft, The Netherlands.
    Dignum, Frank
    Why Bad Coffee?: Explaining Agent Plans with Valuings (2018). In: Computer Safety, Reliability, and Security / [ed] Gallina B., Skavhaug A., Schoitsch E., Bitsch F., Springer International Publishing, 2018, Vol. 11094, pp. 521-534. Conference paper (Refereed)
    Abstract [en]

    An important issue in deploying an autonomous system is how to enable human users and stakeholders to develop an appropriate level of trust in the system. It has been argued that a crucial mechanism to enable appropriate trust is the ability of a system to explain its behaviour. Obviously, such explanations need to be comprehensible to humans. We argue that it makes sense to build on the results of extensive research in social sciences that explores how humans explain their behaviour. Using similar concepts for explanation is argued to help with comprehensibility, since the concepts are familiar. Following work in the social sciences, we propose the use of a folk-psychological model that utilises beliefs, desires, and “valuings”. We propose a formal framework for constructing explanations of the behaviour of an autonomous system, present an (implemented) algorithm for giving explanations, and present evaluation results.
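As a toy illustration of explanations built from beliefs, desires, and valuings, in the spirit of the paper’s title: this is a hypothetical sketch, not the paper’s implemented algorithm, and all domain facts below are invented:

```python
# Folk-psychological ingredients of the explanation.
belief = "the office machine only makes bad coffee"
desire = "have a coffee while staying on schedule"
valuings = {"good coffee": 0.3, "punctuality": 0.9}   # what the agent values

# Candidate plans and how well each promotes the valued outcomes.
plans = {
    "walk to the cafe":   {"good coffee": 1.0, "punctuality": 0.2},
    "use office machine": {"good coffee": 0.1, "punctuality": 1.0},
}

def score(outcomes):
    # A plan's appeal is its value-weighted promotion of outcomes.
    return sum(valuings[k] * v for k, v in outcomes.items())

chosen = max(plans, key=lambda p: score(plans[p]))
print(f"I chose to {chosen} because I believe {belief}, "
      f"I want to {desire}, and I value punctuality over good coffee.")
```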
