umu.se Publications
1 - 5 of 5
  • 1.
    Campano, Erik
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Pierre Lévy's kansei philosophy as understood through human-computer interaction theories (2022). In: 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII), Institute of Electrical and Electronics Engineers (IEEE), 2022. Conference paper (Refereed)
    Abstract [en]

    French industrial designer Pierre Lévy has proposed a way to understand the philosophy behind kansei engineering. His account is perhaps the most detailed explanation of kansei philosophy in a language other than Japanese. Lévy's proposal draws on the ideas of twentieth-century Kyoto School founder Kitarou Nishida, particularly Nishida's interest in phenomenology and his concepts of action-intuition, pure experience, and basho. Five particular elements of Lévy's explanation can be compared to fundamental concepts in theories from the discipline of human-computer interaction. These theories include, but are not limited to, Paul Dourish's embodied interaction, Pierre Rabardel's instrumental genesis, and Susanne Bødker's human-artifact model. Kansei philosophy is thereby characterizable with an entirely new vocabulary and analytical framework, arising from the human-computer interaction literature. This new framework gives scholars in both Japanese and non-Japanese sociocultural settings a set of novel conceptual tools to understand kansei philosophy.

  • 2.
    Campano, Erik
    et al.
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Brännström, Andreas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    An ontology of gradualist machine ethics (2023). In: 2023 Asia Conference on Cognitive Engineering and Intelligent Interaction (CEII), IEEE, 2023, pp. 88-95. Conference paper (Refereed)
    Abstract [en]

    Ethical gradualism is the idea that whether an entity possesses morality is not a yes-or-no question, but rather has answers on a gradual scale. Our paper contributes to the field of machine ethics by replacing the often used, and variously construed, concept of computer "morality" with the more specific concept of "moral relevance". This we define as "the characteristic of having some connection to the moral domain". Our definition requires that an entity's perceived moral relevance can be obtained as an aggregate of multi-axial, continuous-variable moral characteristics such as, but not limited to, patiency, responsibility, and autonomy. These characteristics can furthermore be broken down into concrete sub- (and sub-sub-, etc.) characteristics which are easier to measure in the real world. This gradualist model of perceived moral relevance can then be practically implemented through translation into Web Ontology Language. We depict moral relevance both graphically as a class hierarchy, and in computer code. Our implementation allows computers to recognize perceived moral relevance in other computers. This provides a basic architecture by which computers can learn perceived ethical behavior only by acting with one another. The implementation also makes possible a new kind of experimental moral psychology, in which researchers can compare gradual perceived moral relevance directly between humans and computers.
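    The aggregation idea in this abstract — perceived moral relevance obtained as an aggregate of continuous-variable characteristics such as patiency, responsibility, and autonomy — can be sketched in a few lines. This is an illustrative sketch, not the authors' Web Ontology Language implementation: the characteristic names, the equal default weights, and the weighted-mean aggregation rule are all assumptions made here for demonstration.

    ```python
    def moral_relevance(characteristics, weights=None):
        """Aggregate continuous moral characteristics (each scored in [0, 1])
        into a single perceived-moral-relevance score in [0, 1].

        A weighted mean is assumed here; the paper leaves the aggregation
        function open to other choices."""
        if weights is None:
            # Default assumption: every characteristic contributes equally.
            weights = {name: 1.0 for name in characteristics}
        total_weight = sum(weights[name] for name in characteristics)
        return sum(
            characteristics[name] * weights[name] for name in characteristics
        ) / total_weight

    # Hypothetical entity with scores on three of the axes named in the abstract.
    robot = {"patiency": 0.2, "responsibility": 0.6, "autonomy": 0.7}
    print(round(moral_relevance(robot), 2))  # 0.5
    ```

    Each axis could in turn be computed from its own sub-characteristics the same way, giving the hierarchical (sub- and sub-sub-characteristic) structure the abstract describes.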

  • 3.
    Soma, Rebekka
    et al.
    University of Oslo, Department of Informatics, Norway.
    Bratteteig, Tone
    University of Oslo, Department of Informatics, Norway.
    Saplacan, Diana
    University of Oslo, Department of Informatics, Norway.
    Schimmer, Robyn
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för psykologi.
    Campano, Erik
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Verne, Guri B.
    University of Oslo, Department of Informatics, Norway.
    Strengthening human autonomy in the era of autonomous technology (2022). In: Scandinavian Journal of Information Systems, ISSN 0905-0167, E-ISSN 1901-0990, Vol. 34, no. 2, pp. 163-198, article id 5. Journal article (Refereed)
    Abstract [en]

    ‘Autonomous technologies’ refers to systems that make decisions without explicit human control or interaction. This conceptual paper explores the notion of autonomy by first examining human autonomy, and then using this understanding to analyze how autonomous technology could or should be modelled. First, we discuss what human autonomy means. We conclude that it is the overall space for action, rather than the degree of control, together with the actual choices, or number of choices, that constitute human autonomy. Based on this, our second discussion leads us to suggest the term datanomous to denote technology that builds on, and is restricted by, its own data when operating autonomously. Our conceptual exploration brings forth a more precise definition of human autonomy and datanomous systems. Finally, we conclude this exploration by suggesting that human autonomy can be strengthened by datanomous technologies, but only if they support the human space for action. It is the purpose of human activity that determines if technology strengthens or weakens human autonomy.

  • 4.
    Zicari, Roberto V.
    et al.
    Frankfurt Big Data Lab, Goethe University Frankfurt, Frankfurt, Germany; Department of Business Management and Analytics, Arcada University of Applied Sciences, Helsinki, Finland; Data Science Graduate School, Seoul National University, Seoul, South Korea.
    Ahmed, Sheraz
    German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany.
    Amann, Julia
    Health Ethics and Policy Lab, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland.
    Braun, Stephan Alexander
    Department of Dermatology, University Clinic Münster, Münster, Germany; Department of Dermatology, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany.
    Brodersen, John
    Section of General Practice and Research Unit for General Practice, Department of Public Health, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark; Primary Health Care Research Unit, Region Zealand, Denmark.
    Bruneault, Frédérick
    École des médias, Collège André-Laurendeau, Université du Québec à Montréal and Philosophie, QC, Montreal, Canada.
    Brusseau, James
    Philosophy Department, Pace University, NY, New York, United States.
    Campano, Erik
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Coffee, Megan
    Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, NY, New York, United States.
    Dengel, Andreas
    German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; Department of Computer Science, TU Kaiserslautern, Kaiserslautern, Germany.
    Düdder, Boris
    Department of Computer Science (DIKU), University of Copenhagen (UCPH), Copenhagen, Denmark.
    Gallucci, Alessio
    Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands.
    Gilbert, Thomas Krendl
    Center for Human-Compatible AI, University of California, CA, Berkeley, United States.
    Gottfrois, Philippe
    Department of Biomedical Engineering, Basel University, Basel, Switzerland.
    Goffi, Emmanuel
    The Global AI Ethics Institute, Paris, France.
    Haase, Christoffer Bjerre
    Section for Health Service Research and Section for General Practice, Department of Public Health, University of Copenhagen, Copenhagen, Denmark; Centre for Research in Assessment and Digital Learning, Deakin University, VIC, Melbourne, Australia.
    Hagendorff, Thilo
    Ethics & Philosophy Lab, University of Tuebingen, Tuebingen, Germany.
    Hickman, Eleanore
    Faculty of Law, University of Cambridge, Cambridge, United Kingdom.
    Hildt, Elisabeth
    Center for the Study of Ethics in the Professions, Illinois Institute of Technology, IL, Chicago, United States.
    Holm, Sune
    Department of Food and Resource Economics, Faculty of Science, University of Copenhagen, Copenhagen, Denmark.
    Kringen, Pedro
    Frankfurt Big Data Lab, Goethe University Frankfurt, Frankfurt, Germany.
    Kühne, Ulrich
    Hautmedizin Bad Soden, Bad Soden, Germany.
    Lucieri, Adriano
    German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany; Department of Computer Science, TU Kaiserslautern, Kaiserslautern, Germany.
    Madai, Vince I.
    Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany; QUEST Center for Transforming Biomedical Research, Berlin Institute of Health (BIH), Charité Universitätsmedizin Berlin, Berlin, Germany; School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, Birmingham, United Kingdom.
    Moreno-Sánchez, Pedro A.
    School of Healthcare and Social Work Seinäjoki University of Applied Sciences (SeAMK), Seinäjoki, Finland.
    Medlicott, Oriana
    AI Ethics, London, United Kingdom.
    Ozols, Matiss
    Division of Cell Matrix Biology and Regenerative Medicine, The University of Manchester, Manchester, United Kingdom; Human Genetics, Wellcome Sanger Institute, United Kingdom.
    Schnebel, Eberhard
    Frankfurt Big Data Lab, Goethe University Frankfurt, Frankfurt, Germany.
    Spezzatti, Andy
    Industrial Engineering & Operation Research, UC Berkeley, CA, United States.
    Tithi, Jesmin Jahan
    Intel Labs, CA, Santa Clara, United States.
    Umbrello, Steven
    Institute for Ethics and Emerging Technologies, University of Turin, Turin, Italy.
    Vetter, Dennis
    Frankfurt Big Data Lab, Goethe University Frankfurt, Frankfurt, Germany.
    Volland, Holger
    Z-Inspection® Initiative, NY, New York, United States.
    Westerlund, Magnus
    Department of Business Management and Analytics, Arcada University of Applied Sciences, Helsinki, Finland.
    Wurth, Renee
    T. H Chan School of Public Health, Harvard University, MA, Cambridge, United States.
    Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier (2021). In: Frontiers in Human Dynamics, E-ISSN 2673-2726, Vol. 3, article id 688152. Journal article (Refereed)
    Abstract [en]

    This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other early-phase developments of similar AI tools.

    Download full text (pdf)
  • 5.
    Zicari, Roberto V.
    et al.
    Artificial Intelligence, Arcada University of Applied Sciences, Helsinki, Finland; Data Science Graduate School, Seoul National University, Seoul, South Korea.
    Brusseau, James
    Philosophy Department, Pace University, NY, New York, United States.
    Blomberg, Stig Nikolaj
    University of Copenhagen, Copenhagen Emergency Medical Services, Copenhagen, Denmark.
    Christensen, Helle Collatz
    University of Copenhagen, Copenhagen Emergency Medical Services, Copenhagen, Denmark.
    Coffee, Megan
    Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, NY, New York, United States.
    Ganapini, Marianna B.
    Montreal AI Ethics Institute, Canada and Union College, NY, New York, United States.
    Gerke, Sara
    Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, Harvard Law School, MA, Cambridge, United States.
    Gilbert, Thomas Krendl
    Center for Human-Compatible AI, University of California, CA, Berkeley, United States.
    Hickman, Eleanore
    Faculty of Law, University of Cambridge, Cambridge, United Kingdom.
    Hildt, Elisabeth
    Center for the Study of Ethics in the Professions, Illinois Institute of Technology Chicago, IL, Chicago, United States.
    Holm, Sune
    Department of Food and Resource Economics, Faculty of Science, University of Copenhagen, Copenhagen, Denmark.
    Kühne, Ulrich
    Hautmedizin, Bad Soden, Germany.
    Madai, Vince I.
    CLAIM - Charité Lab for AI in Medicine, Charité Universitätsmedizin Berlin, Berlin, Germany; QUEST Center for Transforming Biomedical Research, Berlin Institute of Health, Charité Universitätsmedizin Berlin, Berlin, Germany; School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, London, United Kingdom.
    Osika, Walter
    Center for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
    Spezzatti, Andy
    Industrial Engineering and Operation Research, University of California, CA, Berkeley, United States.
    Schnebel, Eberhard
    Frankfurt Big Data Lab, Goethe University, Frankfurt, Germany.
    Tithi, Jesmin Jahan
    Parallel Computing Labs, Intel, CA, Santa Clara, United States.
    Vetter, Dennis
    Frankfurt Big Data Lab, Goethe University, Frankfurt, Germany.
    Westerlund, Magnus
    Artificial Intelligence, Arcada University of Applied Sciences, Helsinki, Finland.
    Wurth, Renee
    Fitbiomics, NY, New York, United States.
    Amann, Julia
    Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Zürich, Switzerland.
    Antun, Vegard
    Department of Mathematics, University of Oslo, Oslo, Norway.
    Beretta, Valentina
    Department of Economics and Management, Università degli studi di Pavia, Pavia, Italy.
    Bruneault, Frédérick
    École des médias, Université du Québec à Montréal and Philosophie, Collège André-Laurendeau, QC, Québec, Canada.
    Campano, Erik
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för informatik.
    Düdder, Boris
    Department of Computer Science (DIKU), University of Copenhagen (UCPH), Copenhagen, Denmark.
    Gallucci, Alessio
    Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands.
    Goffi, Emmanuel
    Observatoire Ethique and Intelligence Artificielle de l’Institut Sapiens, Paris, Cachan, France.
    Haase, Christoffer Bjerre
    Section for Health Service Research and Section for General Practice, Department of Public Health, University of Copenhagen, Copenhagen, Denmark.
    Hagendorff, Thilo
    Cluster of Excellence "Machine Learning: New Perspectives for Science", University of Tuebingen, Tuebingen, Germany.
    Kringen, Pedro
    Frankfurt Big Data Lab, Goethe University, Frankfurt, Germany.
    Möslein, Florian
    Institute of the Law and Regulation of Digitalization, Philipps-University Marburg, Marburg, Germany.
    Ottenheimer, Davi
    Inrupt, CA, San Francisco, United States.
    Ozols, Matiss
    University of Manchester and Wellcome Sanger Institute, Cambridge, United Kingdom.
    Palazzani, Laura
    Philosophy of Law, LUMSA University, Rome, Italy.
    Petrin, Martin
    Law Department, Western University, ON, London, Canada; Faculty of Laws, University College London, London, United Kingdom.
    Tafur, Karin
    Law and Ethics, and Legal Tech Entrepreneur, Barcelona, Spain.
    Tørresen, Jim
    Department of Informatics, University of Oslo, Oslo, Norway.
    Volland, Holger
    Head of Community and Communications, Z-Inspection® Initiative, London, United Kingdom.
    Kararigas, Georgios
    Department of Physiology, Faculty of Medicine, University of Iceland, Reykjavik, Iceland.
    On assessing trustworthy AI in healthcare: Machine learning as a supportive tool to recognize cardiac arrest in emergency calls (2021). In: Frontiers in Human Dynamics, E-ISSN 2673-2726, Vol. 3, article id 673104. Journal article (Refereed)
    Abstract [en]

    Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.

    Download full text (pdf)