Kernel equating with covariates
Umeå University, Faculty of Social Sciences, Department of Statistics.
(English) Manuscript (preprint) (Other academic)
Abstract [en]

To equate two forms of a test we need to collect data in such a way that the link between the scales of the two test forms can be estimated. The traditional approach is to use common examinees and/or common items. In this paper we explore the idea of using variables correlated with the test scores (e.g., school grades, education) as a substitute for common items in a non-equivalent groups design. This is done in the framework of Kernel Equating, with an extension of the method developed for post-stratification equating (PSE) in the non-equivalent groups with anchor test (NEAT) design. Data from two administrations of the data sufficiency subtest of the Swedish Scholastic Assessment Test (SweSAT), fall 1996 (96B) and spring 1997 (97A), are used to illustrate the method.
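A rough sketch of the post-stratification idea with covariates (the notation here is assumed and follows the standard PSE formulas rather than the paper's own): with P and Q the populations taking the two forms and T = wP + (1 - w)Q the synthetic target population, the target score probabilities on the form taken by P can be written as

r_j(T) = \sum_{v} P(X = x_j \mid \mathbf{V} = v, P)\, P(\mathbf{V} = v \mid T), \qquad P(\mathbf{V} = v \mid T) = w\, P(\mathbf{V} = v \mid P) + (1 - w)\, P(\mathbf{V} = v \mid Q),

where the covariate vector \mathbf{V} plays the role that the anchor score A plays in the usual NEAT PSE formulas; the probabilities s_k(T) for the other form are obtained analogously by conditioning on \mathbf{V} in Q.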

Keyword [en]
kernel equating, covariates, kernel smoothing, equipercentile equating, log-linear models, non-equivalent groups design
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
URN: urn:nbn:se:umu:diva-32791
OAI: oai:DiVA.org:umu-32791
DiVA: diva2:305803
Available from: 2010-03-26. Created: 2010-03-25. Last updated: 2010-03-29. Bibliographically approved.
In thesis
1. Observed score equating with covariates
2010 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In test score equating the focus is on the problem of finding the relationship between the scales of different test forms. This can be done only if data are collected in such a way that the effect of differences in ability between groups taking different test forms can be separated from the effect of differences in test form difficulty. In standard equating procedures this problem has been solved by using common examinees or common items. With common examinees, as in the equivalent groups design, the single group design, and the counterbalanced design, the examinees taking the test forms are either exactly the same, i.e., each examinee takes both test forms, or random samples from the same population. Common items (anchor items) are usually used when the samples taking the different test forms are assumed to come from different populations.

The thesis consists of four papers and the main theme in three of these papers is the use of covariates, i.e., background variables correlated with the test scores, in observed score equating. We show how covariates can be used to adjust for systematic differences between samples in a non-equivalent groups design when there are no anchor items. We also show how covariates can be used to decrease the equating error in an equivalent groups design or in a non-equivalent groups design.

The first paper, Paper I, is the only paper where the focus is on something other than the incorporation of covariates in equating. The paper is an introduction to test score equating and presents the author's thoughts on the foundations of test score equating. There are a number of different definitions of test score equating in the literature. Some of these definitions are presented and the similarities and differences between them are discussed. An attempt is also made to clarify the connection between the definitions and the most commonly used equating functions.

In Paper II a model is proposed for observed score linear equating with background variables. The idea presented in the paper is to adjust for systematic differences in ability between groups in a non-equivalent groups design by using information from background variables correlated with the observed test scores. It is assumed that conditional on the background variables the two samples can be seen as random samples from the same population. The background variables are used to explain the systematic differences in ability between the populations. The proposed model consists of a linear regression model connecting the observed scores with the background variables and a linear equating function connecting observed scores on one test form to observed scores on the other test form. Maximum likelihood estimators of the model parameters are derived, using an assumption of normally distributed test scores, and data from two administrations of the Swedish Scholastic Assessment Test are used to illustrate the use of the model.
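A minimal sketch of the model structure described above, with assumed notation rather than the paper's own parameterization: within each group the observed scores are regressed on the covariates,

X_i = \beta_{0X} + \boldsymbol{\beta}_X^{\top} \mathbf{z}_i + \varepsilon_{Xi}, \qquad Y_i = \beta_{0Y} + \boldsymbol{\beta}_Y^{\top} \mathbf{z}_i + \varepsilon_{Yi}, \qquad \varepsilon_{Xi} \sim N(0, \sigma_X^2), \; \varepsilon_{Yi} \sim N(0, \sigma_Y^2),

and the covariate-adjusted means and standard deviations implied by these regressions enter the familiar linear equating function

e_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X),

with the model parameters estimated by maximum likelihood as described in the paper.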

In Paper III we use the model presented in Paper II with two different data collection designs: the non-equivalent groups design (with and without anchor items) and the equivalent groups design. Simulated data are used to examine the effect of including covariates on the estimators, in terms of bias, variance, and mean squared error. With the equivalent groups design the results show that using covariates can increase the accuracy of the equating. With the non-equivalent groups design the results show that using an anchor test together with covariates is the most efficient way of reducing the mean squared error of the estimators. Furthermore, with no anchor test, the background variables can be used to adjust for the systematic differences between the populations and produce unbiased estimators of the equating relationship, provided that the “right” variables are used, i.e., the variables explaining those differences.
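The evaluation criteria used in Paper III can be made concrete with a short sketch; the estimator and the numbers below are invented for illustration and are not the paper's simulation design.

import numpy as np

def mse_summary(estimates, true_value):
    # Bias, variance, and mean squared error of an equating estimator
    # evaluated at one score point over many simulated replications.
    estimates = np.asarray(estimates, dtype=float)
    bias = estimates.mean() - true_value
    variance = estimates.var()
    mse = bias**2 + variance  # MSE decomposes as squared bias plus variance
    return bias, variance, mse

# Hypothetical example: 1000 replications of a slightly biased, noisy estimator
# of an equated score whose true value is taken to be 21.4.
rng = np.random.default_rng(1)
replications = 21.4 + 0.3 + 0.8 * rng.standard_normal(1000)
print(mse_summary(replications, 21.4))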

In Paper IV we explore the idea of using covariates as a substitute for an anchor test with a non-equivalent groups design in the framework of Kernel Equating. Kernel Equating can be seen as a method including five different steps: presmoothing, estimation of score probabilities, continuization, equating, and calculating the standard error of equating. For each of these steps we give the theoretical results when observations on covariates are used as a substitute for scores on an anchor test. It is shown that we can use the method developed for Post-Stratification Equating in the non-equivalent groups with anchor test design, but with observations on the covariates instead of scores on an anchor test. The method is illustrated using data from the Swedish Scholastic Assessment Test.
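To make the continuization and equating steps concrete, here is a minimal Python sketch of the Gaussian-kernel continuized CDF and the equipercentile equating function e_Y(x) = G^{-1}(F(x)) used in Kernel Equating. The bandwidths and score probabilities are placeholders; in the covariate version described above, the probability vectors r and s would be the post-stratified estimates obtained from presmoothed joint distributions of scores and covariates, and presmoothing, bandwidth selection, and the standard error of equating are omitted.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def ke_cdf(x, scores, probs, h):
    # Gaussian-kernel continuization of a discrete score distribution;
    # the shrinkage factor a preserves the mean and variance of the scores.
    scores = np.asarray(scores, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h ** 2))
    z = (x - a * scores - (1.0 - a) * mu) / (a * h)
    return np.sum(probs * norm.cdf(z))

def ke_equate(x, scores_x, r, scores_y, s, h_x=0.6, h_y=0.6):
    # Equipercentile equating on the continuized scales: solve G(y) = F(x).
    p = ke_cdf(x, scores_x, r, h_x)
    lo, hi = min(scores_y) - 10.0, max(scores_y) + 10.0
    return brentq(lambda y: ke_cdf(y, scores_y, s, h_y) - p, lo, hi)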

Place, publisher, year, edition, pages
Umeå: Department of Statistics, Umeå University, 2010. 24 p.
Series
Statistical studies, ISSN 1100-8989 ; 41
Keyword
Equating, observed score equating, true scores, item response theory, linear equating function, equipercentile equating, kernel equating, covariates, linear regression, mean squared error
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
urn:nbn:se:umu:diva-32853 (URN)
ISBN 978-91-7264-977-4 (ISBN)
Public defence
2010-04-23, Hörsal D, Samhällsvetarhuset, Umeå universitet, 901 87 Umeå, 10:15 (Swedish)
Available from: 2010-03-30. Created: 2010-03-29. Last updated: 2010-03-30. Bibliographically approved.

Open Access in DiVA

No full text

Authority records

Bränberg, Kenny

