Helpful, harmless, honest?: sociotechnical limits of AI alignment and safety through reinforcement learning from human feedback
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-1112-2981
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9808-2037
Department of Computing Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-8722-5661
Show others and affiliations
2025 (English). In: Ethics and Information Technology, ISSN 1388-1957, E-ISSN 1572-8439, Vol. 27, no. 2, article id 28. Article in journal (Refereed). Published.
Abstract [en]

This paper critically evaluates attempts to align Artificial Intelligence (AI) systems, especially Large Language Models (LLMs), with human values and intentions through Reinforcement Learning from Feedback methods, involving either human feedback (RLHF) or AI feedback (RLAIF). Specifically, we show the shortcomings of the broadly pursued alignment goals of honesty, harmlessness, and helpfulness. Through a multidisciplinary sociotechnical critique, we examine both the theoretical underpinnings and practical implementations of RLHF techniques, revealing significant limitations in their ability to capture the complexities of human ethics and to contribute to AI safety. We highlight tensions inherent in the goals of RLHF, as captured in the HHH principle (helpful, harmless, and honest). In addition, we discuss ethically relevant issues that tend to be neglected in discussions about alignment and RLHF, among them the trade-offs between user-friendliness and deception, flexibility and interpretability, and system safety. We offer an alternative vision for AI safety and ethics that positions RLHF approaches within a broader context of comprehensive design across institutions, processes, and technological systems, and we suggest the establishment of AI safety as a sociotechnical discipline that is open to the normative and political dimensions of artificial intelligence.
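For readers unfamiliar with the RLHF pipeline the paper critiques, the sketch below illustrates the pairwise reward-modelling step that RLHF approaches commonly rely on: a reward model is trained so that responses preferred by human annotators score higher than rejected ones (a Bradley-Terry style preference loss). This is not taken from the paper; the function name, shapes, and values are illustrative assumptions.

    # Minimal sketch (illustrative, not from the paper) of the pairwise
    # preference loss typically used to train reward models in RLHF.
    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen: torch.Tensor,
                        reward_rejected: torch.Tensor) -> torch.Tensor:
        """Negative log-likelihood that the chosen response outranks the rejected one."""
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # Toy usage: scalar reward-model outputs for a batch of (chosen, rejected) pairs.
    chosen = torch.tensor([1.2, 0.3, 0.8])
    rejected = torch.tensor([0.4, 0.5, -0.1])
    print(preference_loss(chosen, rejected))  # smaller when chosen responses score higher

The fitted reward model is then used as the optimization target for a reinforcement learning step (e.g. PPO) over the language model, which is the stage where the helpfulness, harmlessness, and honesty trade-offs discussed in the abstract arise.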

Place, publisher, year, edition, pages
Springer Nature, 2025. Vol. 27, no. 2, article id 28.
Keywords [en]
Artificial intelligence, Large language models, Reinforcement learning, Human feedback, AI ethics, AI safety
National Category
Computer Systems; Artificial Intelligence; Ethics
Research subject
Computer Science; Ethics
Identifiers
URN: urn:nbn:se:umu:diva-239637
DOI: 10.1007/s10676-025-09837-2
Scopus ID: 2-s2.0-105007225963
OAI: oai:DiVA.org:umu-239637
DiVA, id: diva2:1964422
Funder
European Commission, 101120237
Available from: 2025-06-05 Created: 2025-06-05 Last updated: 2025-06-17 Bibliographically approved

Open Access in DiVA

fulltext (737 kB)
File information
File name: FULLTEXT01.pdf
File size: 737 kB
Checksum (SHA-512): 066c69367b9e14bcc05d260a9d05ff091b06a35ef679babc4237ecfe722bd8a75e2ce0363c34d7cb453b94aaa496f7cc3eaa79a6114827949a3e4bccd989bfca
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Dahlgren Lindström, Adam; Methnani, Leila; Ericson, Petter; Coelho Mollo, Dimitri
