Equating challenges when revising large-scale tests: A comparison of different frameworks, methods and designs
Umeå University, Faculty of Social Sciences, Department of Applied Educational Science, Department of Educational Measurement.
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This study compared the performance of kernel and traditional equipercentile observed-score equating methods when linking a revised test to an old version of that test, and when equating two test forms of the revised test. Several equating designs were included for both methods, and the R packages equate and kequate were used to perform the equatings. The equatings were evaluated using the standard error of equating, percent relative error, and the difference that matters. The results show that kernel equating is superior to traditional equating when linking a revised test to an old test under the single group design. Kernel equating was not found to be preferable to traditional equating when equating the revised test. Although the percent relative error was low for all designs when using kernel equating, many score differences between kernel and traditional equating were larger than a difference that matters. The recommendation is therefore to continue equating with the traditional method and to further investigate kernel equating as a future alternative.
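The traditional equipercentile method compared above maps each raw score on one form to the score on the other form that has the same percentile rank. The study's actual analyses used the R packages equate and kequate; purely as a hedged illustration of the underlying idea, a minimal Python sketch (function names and frequency data invented here) might look like:

```python
import numpy as np

def percentile_ranks(counts):
    """Percentile rank of each score point: cumulative proportion below
    the score plus half the proportion at the score, times 100."""
    p = np.asarray(counts, dtype=float)
    p /= p.sum()
    cum_below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    return 100.0 * (cum_below + 0.5 * p)

def equipercentile_equate(x_counts, y_counts, scores):
    """Map each raw score on form X to the form-Y score with the same
    percentile rank, using linear interpolation between score points."""
    prx = percentile_ranks(x_counts)   # percentile ranks on form X
    pry = percentile_ranks(y_counts)   # percentile ranks on form Y
    return np.interp(prx, pry, scores)

# Invented example: score-frequency distributions on a 0-5 point scale.
scores = np.arange(0, 6)
x_counts = [2, 5, 10, 8, 4, 1]   # form X frequencies
y_counts = [1, 4, 9, 9, 5, 2]    # form Y frequencies
equated = equipercentile_equate(x_counts, y_counts, scores)
```

When both forms have identical score distributions, the function returns the identity transformation, which is a common sanity check for any equating implementation.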

National Category
Educational Sciences
Identifiers
URN: urn:nbn:se:umu:diva-138818
OAI: oai:DiVA.org:umu-138818
DiVA, id: diva2:1137567
Available from: 2017-08-31. Created: 2017-08-31. Last updated: 2018-06-09.
In thesis
1. Theory and validity evidence for a large-scale test for selection to higher education
2017 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Validity is a crucial part of all forms of measurement, especially in instruments that are high-stakes for test takers. The aim of this thesis was to examine theory and validity evidence for a recently revised large-scale instrument used for selection to higher education in Sweden, the Swedish Scholastic Assessment Test (SweSAT), and to identify threats to its validity. Previous versions of the SweSAT have been studied intensely, but when the test was revised in 2011, further research was needed to strengthen the validity arguments for it. This thesis adopted the validity approach suggested in the most recent version of the Standards for Educational and Psychological Testing, in which the theoretical basis and five sources of validity evidence are the key aspects of validity.

The four studies that are presented in this thesis focus on different aspects of the SweSAT, including theory, score reporting, item functioning and linking of test forms. These studies examine validity evidence from four of the five sources of validity: evidence based on test content, response processes, internal structure and consequences of testing.

The results of the thesis as a whole show that there is validity evidence supporting some of the validity arguments for the intended interpretations and uses of SweSAT scores, and that there are potential threats to validity that require further attention. Empirical evidence supports the two-dimensional structure of the construct scholastic proficiency, but the construct requires a more thorough definition in order to better examine validity evidence based on content and on consequences for test takers. Section scores provide more information about test takers' strengths and weaknesses than the total score alone and can therefore be reported, but subtest scores do not provide additional information and should not be reported. All four quantitative subtests, as well as the Swedish reading comprehension subtest, are essentially free of differential item functioning (DIF), but two of the four verbal subtests show moderate DIF that could indicate bias. Finally, the equating procedure, although it appears to be appropriate, needs to be examined further to determine whether it is the best available practice for the SweSAT.
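DIF analyses like the ones summarized above ask whether test takers of equal ability, but from different groups, have different probabilities of answering an item correctly. The thesis does not specify its exact procedure in this abstract, so the following is only a hedged sketch (function name and data invented) of one standard approach, the Mantel-Haenszel common odds ratio:

```python
import numpy as np

def mantel_haenszel_dif(item_ref, item_foc, total_ref, total_foc):
    """Mantel-Haenszel common odds ratio for one dichotomous item,
    stratifying test takers by their total test score.

    Returns (alpha_MH, delta_MH), where delta_MH = -2.35 * ln(alpha_MH)
    is the ETS delta scale; |delta_MH| < 1 is usually read as
    negligible DIF, and negative values indicate DIF against the
    focal group."""
    item_ref, item_foc = np.asarray(item_ref), np.asarray(item_foc)
    total_ref, total_foc = np.asarray(total_ref), np.asarray(total_foc)
    num = den = 0.0
    for k in np.union1d(total_ref, total_foc):
        r = item_ref[total_ref == k]       # reference-group 0/1 responses
        f = item_foc[total_foc == k]       # focal-group 0/1 responses
        a, b = r.sum(), r.size - r.sum()   # reference: right / wrong
        c, d = f.sum(), f.size - f.sum()   # focal: right / wrong
        n = r.size + f.size
        if n > 0:
            num += a * d / n               # reference right, focal wrong
            den += b * c / n               # reference wrong, focal right
    if den == 0:
        return float("nan"), float("nan")
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)
```

With identical response patterns in both groups the odds ratio is 1 and the delta value 0, i.e. no DIF, which is the expected baseline for any DIF statistic.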

Some of the results in this thesis are specific to the SweSAT, because only SweSAT data were used, but the design of the studies and the methods applied serve as practical examples of validating a test and are therefore likely to be useful to those involved in test development, test use, and psychometric research.

Suggestions for further research include: (1) a study to create a clearer and more elaborate definition of the construct scholastic proficiency; (2) a large, empirically focused study of subscore value in the SweSAT using repeat test takers, applying Haberman's method along with recently proposed effect size measures; (3) a cross-validation DIF study using more recently administered test forms; (4) a study examining the causes of the recurring score differences between women and men on the SweSAT; and (5) a study re-examining the best practice for equating the current version of the SweSAT, using simulated data in addition to empirical data.

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2017. p. 51
Series
Academic dissertations at the Department of Educational Measurement, ISSN 1652-9650; 10
Keywords
SweSAT, validity, theoretical model, score reporting, subscores, DIF, equating, linking, Högskoleprovet, validitet, teoretisk modell, rapportering av provpoäng, ekvivalering, länkning
National Category
Educational Sciences
Research subject
didactics of educational measurement
Identifiers
urn:nbn:se:umu:diva-138492 (URN)
978-91-7601-732-6 (ISBN)
Public defence
2017-09-22, Hörsal 1031, Norra beteendevetarhuset, Umeå, 10:00 (English)
Available from: 2017-09-01. Created: 2017-08-24. Last updated: 2018-06-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Wedman, Jonathan
