Equating challenges when revising large-scale tests: A comparison of different frameworks, methods and designs
Umeå University, Faculty of Social Sciences, Department of Applied Educational Science, Educational Measurement (BVM). ORCID iD: 0000-0002-8479-9117
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This study compared the performance of kernel and traditional equipercentile observed-score equating methods when linking a revised test to an old version of that test, and when equating two test forms of the revised test. Several equating designs were included for both methods, and the equatings were performed in R, primarily with the packages equate and kequate. The evaluation criteria were the standard error of equating, the percent relative error and the difference that matters. The results show that kernel equating is superior to traditional equating when linking a revised test to an old test under the single group design. Kernel equating was not, however, found to be preferable to traditional equating when equating the revised test. Although the percent relative error was low for all designs when using kernel equating, many score differences between kernel and traditional equating were larger than a difference that matters. The recommendation is therefore to continue to equate with the traditional method and to investigate kernel equating further as a future alternative.
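The study performed its equatings in R with the packages equate and kequate. As background to the comparison, the core idea of equipercentile equating can be sketched in a few lines. The Python sketch below is only an illustration of the concept, not code from the study; the function name is hypothetical and no presmoothing or kernel step is included. Each score point on form X is mapped to the form-Y score with the same percentile rank:

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, score_points):
    """Basic equipercentile link: for each score point on form X,
    find the form-Y score with the same percentile rank."""
    x = np.asarray(x_scores, dtype=float)
    y = np.asarray(y_scores, dtype=float)
    equated = []
    for s in score_points:
        # Percentile rank of s on form X: proportion strictly below s
        # plus half the proportion exactly at s (midpoint convention).
        pr = np.mean(x < s) + 0.5 * np.mean(x == s)
        # Invert the form-Y score distribution at that percentile rank.
        equated.append(float(np.quantile(y, pr)))
    return equated
```

Kernel equating differs in that it first smooths the discrete score distributions with (typically Gaussian) kernels, producing continuous distributions before the percentile mapping; this continuization is what the kequate package implements.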

National subject category
Educational Sciences
Identifiers
URN: urn:nbn:se:umu:diva-138818
OAI: oai:DiVA.org:umu-138818
DiVA, id: diva2:1137567
Available from: 2017-08-31 Created: 2017-08-31 Last updated: 2024-07-02
Part of thesis
1. Theory and validity evidence for a large-scale test for selection to higher education
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Validity is a crucial part of all forms of measurement, especially for instruments that are high-stakes for the test takers. The aim of this thesis was to examine theory and validity evidence for a recently revised large-scale instrument used for selection to higher education in Sweden, the Swedish Scholastic Assessment Test (SweSAT), and to identify threats to its validity. Previous versions of the SweSAT have been studied intensively, but when the test was revised in 2011, further research was needed to strengthen the validity arguments for it. This thesis adopts the validity approach suggested in the most recent version of the Standards for Educational and Psychological Testing, in which the theoretical basis and five sources of validity evidence are the key aspects of validity.

The four studies that are presented in this thesis focus on different aspects of the SweSAT, including theory, score reporting, item functioning and linking of test forms. These studies examine validity evidence from four of the five sources of validity: evidence based on test content, response processes, internal structure and consequences of testing.

The results of the thesis as a whole show that there is validity evidence supporting some of the validity arguments for the intended interpretations and uses of SweSAT scores, and that there are potential threats to validity that require further attention. Empirical evidence supports the two-dimensional structure of the construct scholastic proficiency, but the construct requires a more thorough definition in order to better examine validity evidence based on content and consequences for test takers. Section scores provide more information about test takers' strengths and weaknesses than the total score alone and can therefore be reported, but subtest scores do not provide additional information and should not be reported. All four quantitative subtests, as well as the Swedish reading comprehension subtest, are essentially free of differential item functioning (DIF), but there is moderate DIF that could indicate bias in two of the four verbal subtests. Finally, the equating procedure, although it appears to be appropriate, needs to be examined further to determine whether or not it is the best available practice for the SweSAT.
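DIF classifications of the kind reported above are commonly reached with the Mantel-Haenszel procedure on ability-matched 2x2 tables; on the ETS delta scale, absolute values between roughly 1.0 and 1.5 are conventionally labelled moderate DIF (category B). As a sketch of that common technique (the function name is hypothetical and this is not the thesis's actual analysis code, which the abstract does not specify):

```python
import math

def mantel_haenszel_delta(strata):
    """strata: list of (A, B, C, D) 2x2 counts per matched ability group,
    where A/B are reference-group correct/incorrect counts and
    C/D are focal-group correct/incorrect counts."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    alpha_mh = num / den               # Mantel-Haenszel common odds ratio
    return -2.35 * math.log(alpha_mh)  # ETS delta scale; 0 means no DIF
```

By convention, negative delta values indicate an item that favours the reference group and positive values an item that favours the focal group.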

Some of the results in this thesis are specific to the SweSAT, because only SweSAT data were used, but the design of the studies and the methods that were applied serve as practical examples of validating a test and are therefore likely to be useful to those involved in test development, test use and psychometric research.

Suggestions for further research include: (1) a study to create a clearer and more elaborate definition of the construct scholastic proficiency; (2) a large, empirically focused study of subscore value in the SweSAT using repeat test takers and applying Haberman's method along with recently proposed effect size measures; (3) a cross-validation DIF study using more recently administered test forms; (4) a study that examines the causes of the recurring score differences between women and men on the SweSAT; and (5) a study that re-examines the best practice for equating the current version of the SweSAT, using simulated data in addition to empirical data.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2017. p. 51
Series
Academic dissertations at the department of Educational Measurement, ISSN 1652-9650 ; 10
Keywords
SweSAT, validity, theoretical model, score reporting, subscores, DIF, equating, linking, Högskoleprovet, validitet, teoretisk modell, rapportering av provpoäng, ekvivalering, länkning
National subject category
Educational Sciences
Research subject
beteendevetenskapliga mätningar
Identifiers
urn:nbn:se:umu:diva-138492 (URN)
978-91-7601-732-6 (ISBN)
Public defence
2017-09-22, Hörsal 1031, Norra beteendevetarhuset, Umeå, 10:00 (English)
Available from: 2017-09-01 Created: 2017-08-24 Last updated: 2018-06-09 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Person

Wedman, Jonathan
