Kodade normer: En kvalitativ och kvantitativ studie om bias i generativ AI
Umeå University, Faculty of Arts, Department of culture and media studies.
2025 (Swedish)
Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits
Student thesis
Alternative title
Coded Norms: A Qualitative and Quantitative Study on Bias in Generative AI (English)
Abstract [en]

Over the past few years, generative AI has become popular for creating images and text. AI technology has become a staple of modern society, and its tools are now in daily use in both personal and professional settings. However, concerns have been raised about how these AI systems might reinforce societal biases and stereotypes. This study, "Kodade normer: En kvalitativ och kvantitativ studie om bias i generativ AI", investigates whether and how generative AI reproduces biased or stereotypical results, using three AI tools: Copilot, Stable Diffusion, and AIEASE.

Combining semiotic analysis with quantitative analysis, we examine AI-generated images produced from five broad, neutral prompts: Successful person, Happy person, Truck driver, Nurse, and Ambitious person. The deliberately broad prompts compel the AI models to make their own choices, allowing us to analyze their representational decisions.
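To illustrate the kind of prompting workflow the abstract describes (not the authors' actual setup), the following minimal Python sketch generates a small batch of images for each of the five prompts with Stable Diffusion via the open-source diffusers library; the checkpoint name, sample size, and output paths are assumptions for illustration only, and Copilot and AIEASE are accessed through their own interfaces rather than this code path.

# Minimal sketch, assuming the open-source diffusers library and a public
# Stable Diffusion checkpoint; not the workflow used in the thesis.
import torch
from diffusers import StableDiffusionPipeline

PROMPTS = [
    "Successful person",
    "Happy person",
    "Truck driver",
    "Nurse",
    "Ambitious person",
]

# Load a publicly available checkpoint (the model id is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Generate a handful of images per prompt for later qualitative and
# quantitative coding; the sample size here is illustrative only.
for prompt in PROMPTS:
    for i in range(5):
        image = pipe(prompt).images[0]
        image.save(f"{prompt.lower().replace(' ', '_')}_{i:02d}.png")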

Our findings reveal clear patterns of bias in generative AI, as well as a tendency to assign stereotypical norms concerning success, ambition, gender and ethnicity. The results mirror the existing awareness that generative AI problematically reinforces societal norms and is far from stereotype-free. Future research should explore a broader range of AI tools, examine how AI biases change over time, and compare visual and text-based generative models to uncover broader patterns of representation.

Place, publisher, year, edition, pages
2025, p. 57
Keywords [en]
Stereotype, Bias, Quantitative, Qualitative, Generative AI, AI, Stable Diffusion, Copilot, AIEASE
Keywords [sv]
Stereotyp, Bias, Kvantitativ, Kvalitativ, Generativ AI, AI, Stable Diffusion, Copilot, AIEASE
National Category
Media and Communication Studies
Identifiers
URN: urn:nbn:se:umu:diva-238388
OAI: oai:DiVA.org:umu-238388
DiVA, id: diva2:1956090
Educational program
Programme in Media and Communication Studies: Strategic Communication
Available from: 2025-05-07 Created: 2025-05-05 Last updated: 2025-05-07 Bibliographically approved

Open Access in DiVA

fulltext (10615 kB), 10 downloads
File information
File name: FULLTEXT01.pdf, File size: 10615 kB, Checksum: SHA-512
2c5a1278916e1ad8ae8e8788570356cfd9fd08ae2be26c756e76469fc376ff4365750bc8665ac0cf7647fac8d70108abc76cb4053184b7568b314f020a6ffaaa
Type: fulltext, Mimetype: application/pdf
