Reasons for gender-related differential item functioning in a college admissions test
Umeå University, Faculty of Social Sciences, Department of Applied Educational Science, Educational Measurement (BVM). ORCID iD: 0000-0002-8479-9117
2018 (English). In: Scandinavian Journal of Educational Research, ISSN 0031-3831, E-ISSN 1470-1170, Vol. 62, no. 6, pp. 959-970. Journal article (peer-reviewed), published.
Abstract [en]

Gender fairness in testing can be impeded by the presence of differential item functioning (DIF), which potentially causes test bias. In this study, the presence and causes of gender-related DIF were investigated with real data from 800 items answered by 250,000 test takers. DIF was examined using the Mantel-Haenszel and logistic regression procedures. Little DIF was found in the quantitative items and a moderate amount was found in the verbal items. Vocabulary items favored women if sampled from traditionally female domains but generally not vice versa if sampled from male domains. The sentence completion item format in the English reading comprehension subtest favored men regardless of content. The findings, if supported in a cross-validation study, can potentially lead to changes in how vocabulary items are sampled and in the use of the sentence completion format in English reading comprehension, thereby increasing gender fairness in the examined test.
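The Mantel-Haenszel procedure referred to above stratifies test takers by a matching criterion (usually the total score) and compares, within each stratum, the odds that the reference group (here, men) versus the focal group (here, women) answer the studied item correctly; the pooled odds ratio is then mapped onto the ETS delta scale, where |MH D-DIF| below 1 is conventionally treated as negligible. A minimal sketch with simulated, DIF-free data (the simulation and all names are illustrative, not taken from the study):

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """MH D-DIF (ETS delta scale) for one dichotomous item.

    item  : 0/1 responses to the studied item
    group : 0 = reference group, 1 = focal group
    total : matching variable, usually the total test score
    """
    num = 0.0  # sum over strata of A_k * D_k / N_k
    den = 0.0  # sum over strata of B_k * C_k / N_k
    for score in np.unique(total):
        s = total == score
        a = np.sum((group[s] == 0) & (item[s] == 1))  # reference, correct
        b = np.sum((group[s] == 0) & (item[s] == 0))  # reference, incorrect
        c = np.sum((group[s] == 1) & (item[s] == 1))  # focal, correct
        d = np.sum((group[s] == 1) & (item[s] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha = num / den              # MH common odds ratio
    return -2.35 * np.log(alpha)   # MH D-DIF; |value| < 1 ~ negligible DIF

# Simulated DIF-free data: both groups share the same ability distribution,
# and the item depends on ability only, not on group membership.
rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)
ability = rng.normal(0.0, 1.0, n)
total = np.clip(np.round(ability * 5 + 20), 0, 40).astype(int)
p_correct = 1.0 / (1.0 + np.exp(-(ability - 0.2)))
item = (rng.random(n) < p_correct).astype(int)

delta = mantel_haenszel_dif(item, group, total)
print(round(delta, 2))  # close to 0, i.e. ETS category A (negligible DIF)
```

The logistic regression procedure also mentioned in the abstract fits the same question as a regression of item response on matching score and group; adding a group-by-score interaction term lets it additionally distinguish uniform from non-uniform DIF, which the Mantel-Haenszel statistic cannot.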

Place, publisher, year, edition, pages
Routledge, 2018. Vol. 62, no. 6, pp. 959-970
Keywords [en]
DIF, Mantel-Haenszel, logistic regression, SweSAT, fairness
National subject category
Educational Sciences
Identifiers
URN: urn:nbn:se:umu:diva-138816
DOI: 10.1080/00313831.2017.1402365
ISI: 000445081100010
Scopus ID: 2-s2.0-85035812910
OAI: oai:DiVA.org:umu-138816
DiVA id: diva2:1137558
Available from: 2017-08-31. Created: 2017-08-31. Last updated: 2024-07-02. Bibliographically reviewed.
Part of thesis
1. Theory and validity evidence for a large-scale test for selection to higher education
2017 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Validity is a crucial part of all forms of measurement, especially in instruments that are high-stakes for test takers. The aim of this thesis was to examine theory and validity evidence for a recently revised large-scale instrument used for selection to higher education in Sweden, the Swedish Scholastic Assessment Test (SweSAT), and to identify threats to its validity. Previous versions of the SweSAT have been studied intensively, but when the test was revised in 2011, further research was needed to strengthen the validity arguments for it. This thesis adopted the validity approach suggested in the most recent version of the Standards for Educational and Psychological Testing, in which the theoretical basis and five sources of validity evidence are the key aspects of validity.

The four studies that are presented in this thesis focus on different aspects of the SweSAT, including theory, score reporting, item functioning and linking of test forms. These studies examine validity evidence from four of the five sources of validity: evidence based on test content, response processes, internal structure and consequences of testing.

The results from the thesis as a whole show that there is validity evidence supporting some of the validity arguments for the intended interpretations and uses of SweSAT scores, and that there are potential threats to validity that require further attention. Empirical evidence supports the two-dimensional structure of the construct scholastic proficiency, but the construct requires a more thorough definition in order to better examine validity evidence based on content and consequences for test takers. Section scores provide more information about test takers' strengths and weaknesses than the total score alone and can therefore be reported, but subtest scores do not provide additional information and should not be reported. All four quantitative subtests, as well as the Swedish reading comprehension subtest, are essentially free of differential item functioning (DIF), but there is moderate DIF that could constitute bias in two of the four verbal subtests. Finally, the equating procedure, although it appears to be appropriate, needs to be examined further in order to determine whether it is the best available practice for the SweSAT.
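The equating mentioned above adjusts scores from different test forms so that they can be used interchangeably. The thesis does not spell out the SweSAT's procedure here, but the basic idea can be illustrated with linear (mean-sigma) equating, one of the classical observed-score methods; the score distributions below are invented for illustration:

```python
import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Linear (mean-sigma) equating: map a form-X score x onto form Y's scale.

    A form-X score is sent to the form-Y score with the same standardized
    position (z-score) in its distribution:
        y = sd_y / sd_x * (x - mean_x) + mean_y
    """
    mu_x, sd_x = np.mean(scores_x), np.std(scores_x)
    mu_y, sd_y = np.mean(scores_y), np.std(scores_y)
    return sd_y / sd_x * (x - mu_x) + mu_y

# Invented score distributions for two hypothetical test forms: form Y is
# two points easier on average, with the same spread as form X.
scores_x = np.array([10, 12, 14, 16, 18])
scores_y = np.array([12, 14, 16, 18, 20])

print(linear_equate(14, scores_x, scores_y))  # 16.0
```

A score of 14 sits at the mean of form X, so it is equated to the mean of form Y, 16. Operational equating designs (e.g. anchor-test or equipercentile methods) add machinery for non-equivalent groups and non-linear score relationships, but the goal is the same: comparable reported scores across forms.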

Some of the results in this thesis are specific to the SweSAT, because only SweSAT data were used, but the design of the studies and the methods applied serve as practical examples of validating a test and are therefore likely to be useful to those involved in test development, test use and psychometric research.

Suggestions for further research include: (1) a study to create a clearer and more elaborate definition of the construct, scholastic proficiency; (2) a large and empirically focused study of subscore value in the SweSAT using repeat test takers and applying Haberman's method along with recently proposed effect size measures; (3) a cross-validation DIF study using more recently administered test forms; (4) a study that examines the causes of the recurring score differences between women and men on the SweSAT; and (5) a study that re-examines the best practice for equating the current version of the SweSAT, using simulated data in addition to empirical data.

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2017. p. 51
Series
Academic dissertations at the department of Educational Measurement, ISSN 1652-9650 ; 10
Keywords
SweSAT, validity, theoretical model, score reporting, subscores, DIF, equating, linking, Högskoleprovet, validitet, teoretisk modell, rapportering av provpoäng, ekvivalering, länkning
National subject category
Educational Sciences
Research subject
educational measurement
Identifiers
URN: urn:nbn:se:umu:diva-138492
ISBN: 978-91-7601-732-6
Public defence
2017-09-22, 10:00, Hörsal 1031, Norra beteendevetarhuset, Umeå (English)
Available from: 2017-09-01. Created: 2017-08-24. Last updated: 2018-06-09. Bibliographically reviewed.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Author

Wedman, Jonathan
