We don’t talk about that: case studies on intersectional analysis of social bias in large language models
Devinney, Hannah. Umeå University, Faculty of Science and Technology, Department of Computing Science; Umeå University, Faculty of Social Sciences, Umeå Centre for Gender Studies (UCGS); Linköping University. ORCID iD: 0000-0003-0278-9757
Björklund, Jenny. Centre for Gender Research, Uppsala University. ORCID iD: 0000-0002-4954-4397
Björklund, Henrik. Umeå University, Faculty of Science and Technology, Department of Computing Science (Foundations of Language Processing). ORCID iD: 0000-0002-4696-9787
2024 (English). In: Proceedings of the 5th workshop on gender bias in natural language processing (GeBNLP) / [ed] Agnieszka Faleńska; Christine Basta; Marta Costa-jussà; Seraphina Goldfarb-Tarrant; Debora Nozza, Association for Computational Linguistics, 2024, p. 33-44. Conference paper, Published paper (Refereed)
Abstract [en]

Despite concerns that Large Language Models (LLMs) are vectors for reproducing and amplifying social biases such as sexism, transphobia, islamophobia, and racism, there is a lack of work qualitatively analyzing how such patterns of bias are generated by LLMs. We use mixed-methods approaches and apply a feminist, intersectional lens to the problem across two language domains, Swedish and English, by generating narrative texts using LLMs. We find that hegemonic norms are consistently reproduced; dominant identities are often treated as ‘default’; and discussion of identity itself may be considered ‘inappropriate’ by the safety features applied to some LLMs. Due to the differing behaviors of models, depending both on their design and the language they are trained on, we observe that strategies of identifying “bias” must be adapted to individual models and their socio-cultural contexts.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2024. p. 33-44
National Category
Language Technology (Computational Linguistics)
Research subject
computational linguistics
Identifiers
URN: urn:nbn:se:umu:diva-228891
Scopus ID: 2-s2.0-85204398108
ISBN: 979-8-89176-137-7 (electronic)
OAI: oai:DiVA.org:umu-228891
DiVA, id: diva2:1893187
Conference
The 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP), Bangkok, Thailand, 16 August 2024.
Available from: 2024-08-29. Created: 2024-08-29. Last updated: 2024-10-07. Bibliographically approved.

Open Access in DiVA

fulltext (395 kB), 19 downloads
File information
File name: FULLTEXT01.pdf
File size: 395 kB
Checksum (SHA-512): d7633920487b5e21666e2d3fc40ab70e87094bd7db5961527ec37f1ab3681971eb86cdb1e39ce56d123a4354fecd409f8f4239744c7b95da4f36161764d6a8ae
Type: fulltext
Mimetype: application/pdf

Other links

Scopus
Abstract
Conference proceedings

Authority records

Devinney, Hannah; Björklund, Henrik

