Theory and validity evidence for a large-scale test for selection to higher education
Wedman, Jonathan. Umeå University, Faculty of Social Sciences, Department of Applied Educational Science, Department of Educational Measurement. ORCID iD: 0000-0002-8479-9117
2017 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Validity is a crucial part of all forms of measurement, especially in instruments that are high stakes for test takers. The aim of this thesis was to examine theory and validity evidence for a recently revised large-scale instrument used for selection to higher education in Sweden, the Swedish Scholastic Assessment Test (SweSAT), and to identify threats to its validity. Previous versions of the SweSAT have been studied intensively, but when the test was revised in 2011, further research was needed to strengthen the validity arguments for it. This thesis adopted the validity approach suggested in the most recent version of the Standards for Educational and Psychological Testing, in which the theoretical basis and five sources of validity evidence are the key aspects of validity.

The four studies presented in this thesis focus on different aspects of the SweSAT, including theory, score reporting, item functioning and the linking of test forms. Together they examine validity evidence from four of the five sources: evidence based on test content, response processes, internal structure and consequences of testing.

The results of the thesis as a whole show that there is validity evidence supporting some of the validity arguments for the intended interpretations and uses of SweSAT scores, and that there are potential threats to validity that require further attention. Empirical evidence supports the two-dimensional structure of the construct scholastic proficiency, but the construct requires a more thorough definition in order to better examine validity evidence based on content and on consequences for test takers. Section scores provide more information about test takers' strengths and weaknesses than the total score alone and can therefore be reported, but subtest scores provide no additional information and should not be reported. All four quantitative subtests, as well as the Swedish reading comprehension subtest, are essentially free of differential item functioning (DIF), but two of the four verbal subtests show moderate DIF that may reflect bias. Finally, the equating procedure, although it appears to be appropriate, needs to be examined further to determine whether it is the best available practice for the SweSAT.
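
The two-dimensional (verbal/quantitative) structure referred to above can be probed informally by inspecting the eigenvalues of the inter-subtest correlation matrix. The sketch below is illustrative only: it uses simulated data rather than SweSAT responses, and all names and parameters are hypothetical.

```python
import numpy as np

# Simulated subtest scores (hypothetical, NOT SweSAT data): four "verbal"
# subtests loading on one common factor and four "quantitative" subtests
# loading on another.
rng = np.random.default_rng(0)
n = 1000
verbal = rng.normal(size=(n, 1)) + 0.5 * rng.normal(size=(n, 4))
quant = rng.normal(size=(n, 1)) + 0.5 * rng.normal(size=(n, 4))
scores = np.hstack([verbal, quant])

# Two dominant eigenvalues of the inter-subtest correlation matrix are
# consistent with a two-dimensional internal structure.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("eigenvalues:", np.round(eigvals, 2))
print("share of variance in first two components:",
      round(eigvals[:2].sum() / eigvals.sum(), 2))
```

A confirmatory factor analysis, as typically used in operational validation work, would test the hypothesized structure directly; the eigenvalue inspection above is only a quick descriptive check.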

Some of the results in this thesis are specific to the SweSAT because only SweSAT data were used, but the design of the studies and the methods applied serve as practical examples of how to validate a test, and are therefore likely to be useful to test developers, test users and psychometric researchers more generally.

Suggestions for further research include: (1) a study to create a clearer and more elaborate definition of the construct scholastic proficiency; (2) a large, empirically focused study of subscore value in the SweSAT using repeat test takers and applying Haberman's method along with recently proposed effect size measures; (3) a cross-validation DIF study using more recently administered test forms; (4) a study examining the causes of the recurring score differences between women and men on the SweSAT; and (5) a study re-examining the best practice for equating the current version of the SweSAT, using simulated data in addition to empirical data.

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2017. 51 p.
Series
Academic Dissertations at the Department of Educational Measurement, ISSN 1652-9650; 10
Keywords [en]
SweSAT, validity, theoretical model, score reporting, subscores, DIF, equating, linking
Keywords [sv]
Högskoleprovet, validitet, teoretisk modell, rapportering av provpoäng, ekvivalering, länkning (in English: the SweSAT, validity, theoretical model, test score reporting, equating, linking)
National Category
Educational Sciences
Research subject
didactics of educational measurement
Identifiers
URN: urn:nbn:se:umu:diva-138492
ISBN: 978-91-7601-732-6 (print)
OAI: oai:DiVA.org:umu-138492
DiVA id: diva2:1135845
Public defence
2017-09-22, Hörsal 1031, Norra beteendevetarhuset, Umeå, 10:00 (English)
Available from: 2017-09-01. Created: 2017-08-24. Last updated: 2018-06-09. Bibliographically approved.
List of papers
1. From aptitude to proficiency: The theory behind the Swedish Scholastic Assessment Test
(English). Manuscript (preprint) (Other academic)
Abstract [en]

Validity arguments for tests should include both theory and empirical evidence, but no theoretical framework for the Swedish Scholastic Assessment Test (SweSAT) has yet been suggested. The purpose of this study was to formulate and present, for the first time, theoretical models for the original and current SweSAT versions, using a synthesis of information from reports and scientific studies. The study also follows the development of the SweSAT's construct from scholastic aptitude to what later became scholastic proficiency. The findings were that the 1977 model was theoretically elaborate but had little empirical support for the construct domains. In contrast, the 2011 model had substantial empirical support, but its construct was less precisely defined. Both models share the same purpose: to measure what is required to succeed in higher education. Suggestions for future research include defining the contents of the SweSAT's current construct, scholastic proficiency, more precisely.

National Category
Educational Sciences
Identifiers
urn:nbn:se:umu:diva-138812 (URN)
Available from: 2017-08-31 Created: 2017-08-31 Last updated: 2018-06-09
2. Methods for Examining the Psychometric Quality of Subscores: A Review and Application
2015 (English). In: Practical Assessment, Research, and Evaluation, E-ISSN 1531-7714, Vol. 20, article id 21. Article in journal (Refereed). Published.
Abstract [en]

When subscores on a test are reported to test takers, the appropriateness of reporting them depends on whether they provide useful information beyond what the total score already provides. Subscores that fail to do so lack adequate psychometric quality and should not be reported. There are several methods for examining the quality of subscores, and in this study seven such methods, four based on classical test theory and three based on item response theory, were reviewed and applied to empirical data. The data consisted of test takers' scores on four test forms, two administrations of a first version of a college admission test and two administrations of a second version, and the analyses were carried out at the subtest and section levels. The two section scores were found to have adequate psychometric quality with all methods used, whereas the results for subtest scores ranged from almost all scores having adequate psychometric quality to none. The authors recommend using Haberman's method and the related utility index because of their solid theoretical foundation and because of various issues with the other methods.
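
Haberman's criterion compares how well the true subscore is predicted by the observed subscore versus by the observed total score, using the proportional reduction in mean squared error (PRMSE). The sketch below is a minimal classical-test-theory illustration of that comparison; the function names are mine, Cronbach's alpha stands in for the reliability estimate, and nothing here reproduces the paper's actual analyses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def haberman_prmse(sub_items, all_items):
    """PRMSE of predicting the true subscore from (a) the observed
    subscore and (b) the observed total score (Haberman-style check)."""
    s = sub_items.sum(axis=1)            # observed subscore
    x = all_items.sum(axis=1)            # observed total score
    alpha_s = cronbach_alpha(sub_items)  # reliability estimate for S
    var_s, var_x = s.var(ddof=1), x.var(ddof=1)
    var_true_s = alpha_s * var_s         # true-subscore variance
    var_err_s = (1 - alpha_s) * var_s    # subscore error variance
    # Cov(X, T_s): the subscore's own measurement error is part of the
    # total score's error, so it is subtracted from the observed covariance.
    cov_x_ts = np.cov(x, s, ddof=1)[0, 1] - var_err_s
    prmse_sub = alpha_s                  # PRMSE using the subscore itself
    prmse_tot = cov_x_ts**2 / (var_x * var_true_s)
    return prmse_sub, prmse_tot
```

A subscore is then a candidate for reporting only when the first PRMSE clearly exceeds the second; Feinberg and Wainer's value-added ratio expresses the same comparison as the ratio of the two PRMSEs.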

Keywords
subscores, score reporting, mean squared error, factor analysis, IRT, college admissions tests
National Category
Pedagogy; Psychology
Research subject
didactics of educational measurement
Identifiers
urn:nbn:se:umu:diva-112181 (URN)
Available from: 2015-12-03. Created: 2015-12-03. Last updated: 2024-01-16. Bibliographically approved.
3. Reasons for gender-related differential item functioning in a college admissions test
2018 (English). In: Scandinavian Journal of Educational Research, ISSN 0031-3831, E-ISSN 1470-1170, Vol. 62, no. 6, p. 959-970. Article in journal (Refereed). Published.
Abstract [en]

Gender fairness in testing can be impeded by the presence of differential item functioning (DIF), which potentially causes test bias. In this study, the presence and causes of gender-related DIF were investigated with real data from 800 items answered by 250,000 test takers. DIF was examined using the Mantel-Haenszel and logistic regression procedures. Little DIF was found in the quantitative items, and a moderate amount was found in the verbal items. Vocabulary items sampled from traditionally female domains favored women, but items sampled from traditionally male domains did not, in general, favor men. The sentence completion item format in the English reading comprehension subtest favored men regardless of content. The findings, if supported in a cross-validation study, could lead to changes in how vocabulary items are sampled and in the use of the sentence completion format in English reading comprehension, thereby increasing gender fairness in the examined test.
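
Of the two procedures named above, the Mantel-Haenszel procedure is the more easily summarized: test takers are stratified by total score, a common odds ratio of answering the item correctly is pooled across strata, and the result is mapped onto the ETS delta scale. The sketch below is a minimal from-scratch illustration with my own function name, not the study's implementation.

```python
import numpy as np

def mantel_haenszel_delta(correct, group, total_score):
    """Mantel-Haenszel common odds ratio for one item, stratified by
    total score, on the ETS delta scale (-2.35 * ln(alpha)).
    `correct` is a boolean array; `group` is 1 for the reference group
    and 0 for the focal group. Negative delta flags an item that
    favors the reference group."""
    num, den = 0.0, 0.0
    for k in np.unique(total_score):
        m = total_score == k
        ref = group[m] == 1
        right = correct[m]
        a = np.sum(right & ref)     # reference, correct
        b = np.sum(~right & ref)    # reference, incorrect
        c = np.sum(right & ~ref)    # focal, correct
        d = np.sum(~right & ~ref)   # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return -2.35 * np.log(num / den)

# Commonly used ETS classification: |delta| < 1 negligible (A),
# 1 <= |delta| < 1.5 moderate (B), |delta| >= 1.5 large (C), with the
# B and C labels also requiring statistical significance.
```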

Place, publisher, year, edition, pages
Routledge, 2018
Keywords
DIF, Mantel-Haenszel, logistic regression, SweSAT, fairness
National Category
Educational Sciences
Identifiers
URN: urn:nbn:se:umu:diva-138816
DOI: 10.1080/00313831.2017.1402365
ISI: 000445081100010
Scopus ID: 2-s2.0-85035812910
Available from: 2017-08-31. Created: 2017-08-31. Last updated: 2022-03-09. Bibliographically approved.
4. Equating challenges when revising large-scale tests: A comparison of different frameworks, methods and designs
(English). Manuscript (preprint) (Other academic)
Abstract [en]

This study compared the performance of kernel and traditional equipercentile observed-score equating methods when linking a revised test to an old version of that test, and when equating two test forms of the revised test. Several equating designs were included for both methods, and the R packages equate and kequate were used to perform the equatings. The evaluation criteria were the standard error of equating, percent relative error and the difference that matters. The results show that kernel equating is superior to traditional equating when linking a revised test to an old test under the single group design. Kernel equating was not found to be preferable to traditional equating when equating the revised test. Although the percent relative error was low for all designs when using kernel equating, many score differences between kernel and traditional equating were larger than a difference that matters. The recommendation is therefore to continue equating with the traditional method and to investigate kernel equating further as a future alternative.
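
The study performed its equatings with the R packages equate and kequate; as a language-neutral illustration of the underlying idea, the sketch below implements bare-bones equipercentile equating for a single group design in Python. The continuization step is reduced to interpolation over the empirical distributions, whereas operational methods add presmoothing (and, in kernel equating, Gaussian kernel continuization); all data here are simulated.

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, x_points):
    """Map scores on form X to the form-Y scale by matching percentile
    ranks: e_Y(x) = Q_Y(P_X(x))."""
    x_sorted = np.sort(x_scores)
    p = np.searchsorted(x_sorted, x_points, side="right") / len(x_scores)
    return np.quantile(y_scores, np.clip(p, 0.0, 1.0))

# Illustrative single group design: the same simulated test takers
# answer both forms, with form Y built to be about two points harder.
rng = np.random.default_rng(1)
theta = rng.normal(size=5000)
x = np.clip(np.round(20 + 8 * theta + rng.normal(scale=2, size=5000)), 0, 40)
y = np.clip(np.round(18 + 8 * theta + rng.normal(scale=2, size=5000)), 0, 40)
print(equipercentile_equate(x, y, np.array([10, 20, 30])))
```

On this simulated data the equated scores come out roughly two points below the form-X scores, matching the built-in difficulty difference between the forms.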

National Category
Educational Sciences
Identifiers
urn:nbn:se:umu:diva-138818 (URN)
Available from: 2017-08-31 Created: 2017-08-31 Last updated: 2018-06-09

Open Access in DiVA

fulltext: FULLTEXT02.pdf (618 kB)
spikblad: FULLTEXT03.pdf (120 kB)
cover: COVER02.jpg (8112 kB)

isbn
urn-nbn

Altmetric score

isbn
urn-nbn
Total: 3939 hits