Publications (10 of 19)
Boman, B. & Laukaityte, I. (2026). The nexus between economic development, academic achievement and democratisation: evidence from PISA 2022 and 2018. Globalisation, Societies and Education
2026 (English). In: Globalisation, Societies and Education, ISSN 1476-7724, E-ISSN 1476-7732. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Democratisation, economic development and academic achievement are significant social dimensions that are potentially interrelated. The current article aims to contribute to a more comprehensive understanding of the relations between these three dimensions by taking advantage of data from the Programme for International Student Assessment (PISA), including the 2018 and 2022 surveys. The regression and mediation models were based on both individual-level data (eight countries across diverse regime types) and cross-country-level data (N = 77 in 2018, 81 in 2022). The results suggest direct effects of income on democratisation across both surveys and analytical levels. The role of academic achievement is less definitive: it does not mediate the relation between income and democratisation, although it predicts country-level income.

Place, publisher, year, edition, pages
Routledge, 2026
Keywords
Democratisation, economic development, economic growth, PISA, academic achievement
National Category
Economics
Identifiers
urn:nbn:se:umu:diva-248553 (URN); 10.1080/14767724.2025.2608680 (DOI); 001662119100001 (); 2-s2.0-105027526988 (Scopus ID)
Available from: 2026-01-14. Created: 2026-01-14. Last updated: 2026-02-08.
Wiberg, M. & Laukaityte, I. (2025). Calculating bias in test score equating in a NEAT design. Applied psychological measurement, 49(7), 350-366
2025 (English). In: Applied psychological measurement, ISSN 0146-6216, E-ISSN 1552-3497, Vol. 49, no. 7, p. 350-366. Article in journal (Refereed). Published.
Abstract [en]

Test score equating is used to make scores from different test forms comparable, even when groups differ in ability. In practice, the non-equivalent groups with anchor test (NEAT) design is commonly used. The overall aim was to compare the amount of bias under different conditions when using either chained equating or frequency estimation with five different criterion functions: the identity function, linear equating, equipercentile equating, chained equating and frequency estimation. We used real test data from a multiple-choice, binary-scored college admissions test to illustrate that the choice of criterion function matters. Further, we simulated data in line with the empirical data to examine differences in ability between groups, differences in item difficulty, differences in anchor test and regular test form length, differences in the correlations between the anchor test form and regular test forms, and different sample sizes. The results indicate that how bias is defined heavily affects the conclusions we draw about which equating method is to be preferred in different scenarios. Practical implications for standardized tests are given, together with recommendations on how to calculate bias when evaluating equating transformations.
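The bias computation the abstract refers to can be sketched as follows. This is a minimal illustration in Python, assuming a linear criterion function; the score scale, form moments, and the constant-shift "equating output" are invented and do not come from the article's data.

```python
def linear_criterion(x, mu_x, sd_x, mu_y, sd_y):
    # Linear equating used as the criterion: matches the means and
    # standard deviations of the two test forms.
    return mu_y + (sd_y / sd_x) * (x - mu_x)

def equating_bias(equated, criterion):
    # Mean signed difference between the equated scores and the criterion
    # values at each score point.
    diffs = [e - c for e, c in zip(equated, criterion)]
    return sum(diffs) / len(diffs)

# Illustrative 0-10 score scale with made-up form moments.
score_points = list(range(11))
criterion = [linear_criterion(x, 5.0, 2.0, 6.0, 2.5) for x in score_points]
equated = [c + 0.3 for c in criterion]     # hypothetical equating output
bias = equating_bias(equated, criterion)   # constant shift, so bias is about 0.3
```

Swapping `linear_criterion` for an equipercentile or chained-equating criterion changes `bias`, which is the article's point: the criterion function determines what counts as bias.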

Place, publisher, year, edition, pages
Sage Publications, 2025
Keywords
criterion function, frequency estimation, chained equating
National Category
Probability Theory and Statistics
Research subject
Statistics
Identifiers
urn:nbn:se:umu:diva-236964 (URN); 10.1177/01466216251330305 (DOI); 001450757600001 (); 40162326 (PubMedID); 2-s2.0-105001869754 (Scopus ID)
Funder
Marianne and Marcus Wallenberg Foundation, 2019.0129
Available from: 2025-03-26. Created: 2025-03-26. Last updated: 2025-09-22. Bibliographically approved.
Laukaityte, I., Wallin, G. & Wiberg, M. (2025). Combining propensity scores and common items for test score equating. Applied psychological measurement, Article ID 01466216251363240.
2025 (English). In: Applied psychological measurement, ISSN 0146-6216, E-ISSN 1552-3497, article id 01466216251363240. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

Ensuring that test scores are fair and comparable across different test forms and different test groups is a significant statistical challenge in educational testing. Methods to achieve score comparability, a process known as test score equating, often rely on including common test items or assuming that test-taker groups are similar in key characteristics. This study explores a novel approach that combines propensity scores, based on test takers' background covariates, with information from common items using kernel smoothing techniques for binary-scored test items. An empirical analysis using data from a high-stakes college admissions test evaluates the standard errors and differences in adjusted test scores. A simulation study examines the impact of factors such as the number of test takers, the number of common items, and the correlation between covariates and test scores on the method's performance. The findings demonstrate that integrating propensity scores with common-item information reduces standard errors and bias more effectively than using either source alone. This suggests that balancing the groups on the test takers' covariates enhances the fairness and accuracy of test score comparisons across different groups. The proposed method highlights the benefits of considering all the collected data to improve score comparability.
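As a rough illustration of the kernel-smoothing step mentioned above, the sketch below continuizes a hypothetical discrete score distribution with a Gaussian kernel. The score distribution and the bandwidth `h` are invented, and the mean- and variance-preserving rescaling used in operational kernel equating is deliberately omitted.

```python
import math

# Hypothetical discrete score distribution over scores 0..5 (not real test data).
scores = [0, 1, 2, 3, 4, 5]
probs = [0.05, 0.15, 0.30, 0.30, 0.15, 0.05]

def kernel_density(x, scores, probs, h=0.6):
    # Gaussian-kernel continuization of the discrete score distribution:
    # each score point contributes a normal bump weighted by its probability.
    # (Operational kernel equating also rescales so the continuized
    # distribution keeps the discrete mean and variance; omitted here.)
    norm = h * math.sqrt(2 * math.pi)
    return sum(p * math.exp(-0.5 * ((x - s) / h) ** 2) / norm
               for s, p in zip(scores, probs))
```

The resulting density integrates to one, which is what lets the equating function be defined through continuous percentiles rather than the jumpy discrete distribution.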

Place, publisher, year, edition, pages
Sage Publications, 2025
Keywords
academic admission, educational testing, equating, fairness, nonequivalent groups with anchor test design
National Category
Probability Theory and Statistics; Psychology (Excluding Applied Psychology)
Identifiers
urn:nbn:se:umu:diva-243387 (URN); 10.1177/01466216251363240 (DOI); 001539586300001 (); 40757034 (PubMedID); 2-s2.0-105012775902 (Scopus ID)
Funder
Marianne and Marcus Wallenberg Foundation, 2019.0129
Available from: 2025-08-20. Created: 2025-08-20. Last updated: 2025-08-20.
Neagu, T. & Laukaityte, I. (2025). Gender-related differential item functioning in SweSAT verbal subtests: the role of extramural English activities in first and foreign language performance. Frontiers in Education, 10, Article ID 1656734.
2025 (English). In: Frontiers in Education, E-ISSN 2504-284X, Vol. 10, article id 1656734. Article in journal (Refereed). Published.
Abstract [en]

Standardized admission tests such as the Swedish Scholastic Aptitude Test (SweSAT) aim to ensure fairness in higher education selection by assessing verbal and quantitative skills. Previous research on the SweSAT indicates declining scores in the Swedish reading subtest and improved scores in the English reading subtest. Gender differences are present, with males outperforming females on most SweSAT subtests: males often perform better on the multiple-choice format used in the SweSAT, especially in English as a foreign language, while females typically perform better in school. Informal English exposure through extramural English (EE) activities, such as digital gaming and reading, is associated with higher English proficiency, with males engaging more in gaming. However, the impact of EE on first-language proficiency, alongside gender-related differences in test performance, remains unclear. This study investigates how EE activities and item format contribute to gender-related performance differences in the SweSAT verbal subtests. A total of 5,230 SweSAT test-takers completed a questionnaire on their engagement in EE activities, focusing on reading and gaming. The SweSAT verbal items were examined using Mantel-Haenszel (MH) differential item functioning (DIF) analyses to identify gender- and EE-related item biases. Results showed that gamers were more likely to be favored on English reading comprehension items, whereas non-gamers were favored on Swedish subtests. English items displayed DIF favoring frequent readers, whereas low-frequency readers were favored on some Swedish items. Male-favored DIF appeared mainly on English items, and females were favored on Swedish items. No consistent DIF patterns were linked to item format or word class across verbal items.
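The Mantel-Haenszel analysis described above rests on a common odds ratio pooled over total-score strata. A minimal sketch, using hypothetical counts rather than SweSAT data:

```python
# Hedged sketch: Mantel-Haenszel common odds ratio for a single item, with
# test takers stratified by total score. All counts below are invented.

def mh_odds_ratio(strata):
    """Each stratum is a 2x2 table (a, b, c, d):
    a = reference group correct, b = reference group incorrect,
    c = focal group correct,     d = focal group incorrect."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Three total-score strata with hypothetical counts:
tables = [(30, 10, 25, 15), (40, 20, 35, 25), (20, 30, 15, 35)]
alpha_mh = mh_odds_ratio(tables)   # > 1 suggests the item favours the reference group
```

Operational DIF analyses usually go further, transforming the log odds ratio to the ETS delta scale and testing it with the MH chi-square statistic; both steps are omitted in this sketch.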

Place, publisher, year, edition, pages
Frontiers Media S.A., 2025
Keywords
college admission test, gender differences, informal English learning, language proficiency, Mantel-Haenszel, test fairness
National Category
Pedagogy
Identifiers
urn:nbn:se:umu:diva-245723 (URN); 10.3389/feduc.2025.1656734 (DOI); 001588812600001 (); 2-s2.0-105018636501 (Scopus ID)
Available from: 2025-10-23. Created: 2025-10-23. Last updated: 2025-10-23. Bibliographically approved.
Neagu, T., Eklöf, H., Laukaityte, I. & Wedman, J. (2025). Reading, watching and gaming: exploring the relationships between extramural English activities and academic L2 English reading comprehension in a Swedish university admissions test context. Education Inquiry
2025 (English). In: Education Inquiry, E-ISSN 2000-4508. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

This study explores the type and frequency of extramural English (EE) activities (e.g. reading, speaking and gaming) and the relationships between EE activities and performance on a standardised English reading comprehension (ERC) test in a sample of young adults, with a particular focus on potential sex differences and on online gaming as an EE activity. Participants consist of 6,079 test-takers of the Swedish Scholastic Aptitude Test (SweSAT), a test used for admission to higher education; ERC is one of its subtests and the only one that assesses English proficiency. The type and frequency of engagement in EE activities were assessed through a self-report questionnaire. Correlation, ANOVA and multiple linear regression analyses were conducted to explore how EE activities influence ERC performance, as well as differences between males and females. Overall, the results show positive correlations and significant relationships between EE activities and ERC ability, especially for reading, watching English content and moderate-to-high-frequency gaming. Sex differences in EE exposure and test performance were observed. In summary, results from this large-scale study corroborate previous findings that English language acquisition also takes place outside the formal context and that leisure-time activities may enhance ERC ability. Implications are discussed.

Place, publisher, year, edition, pages
Routledge, 2025
Keywords
English reading comprehension, Extramural English, gaming, sex differences, SweSAT
National Category
Pedagogy
Identifiers
urn:nbn:se:umu:diva-244184 (URN); 10.1080/20004508.2025.2556572 (DOI); 001566091500001 (); 2-s2.0-105015317417 (Scopus ID)
Available from: 2025-09-22. Created: 2025-09-22. Last updated: 2025-09-22.
Karimova, K., Laukaityte, I. & Wikström, C. (2025). The cognitive diagnostic model analysis of the relationship between verbal and quantitative skills with background characteristics: evidence from the Swedish Scholastic Assessment test 2023. Journal of advanced academics
2025 (English). In: Journal of advanced academics, ISSN 1932-202X. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

For decades, researchers have sought analytic methods that yield diagnostic information about test-takers while ensuring that high-stakes tests remain free of construct-irrelevant bias. This study applies a general cognitive-diagnostic-model (CDM) framework to analyze the quantitative and verbal skills (QVS) assessed by the Swedish Scholastic Aptitude Test (SweSAT). A three-step latent-class logistic-regression approach was used to investigate the relationship between test-takers' background characteristics and performance in each domain subskill. The analysis was conducted using representative data from 41,451 test-takers from the 2023 administration of the SweSAT, focusing on performance across four quantitative and four verbal subskills, and examining test characteristics. The results showed that the CDM method was appropriate for analyzing QVS, with evidence of measurement invariance across sex, age, and educational levels in the subskills. Additionally, the findings revealed distinct associations between sex, educational level, and age with performance in each QVS subskill. Implications for equitable selection in higher education are discussed.

Place, publisher, year, edition, pages
Sage Publications, 2025
Keywords
quantitative and verbal skills, background characteristics, cognitive diagnosis models, three-step latent analysis, differential item functioning, logistic regression, Swedish Scholastic Aptitude Test
National Category
Pedagogy
Identifiers
urn:nbn:se:umu:diva-247371 (URN); 10.1177/1932202X251371339 (DOI); 001555851000001 (); 2-s2.0-105024359778 (Scopus ID)
Available from: 2025-12-08. Created: 2025-12-08. Last updated: 2025-12-18.
Laukaityte, I. & Wiberg, M. (2024). Impacts of differences in group abilities and anchor test features on three non-IRT test equating methods. Practical Assessment, Research, and Evaluation, 29(5), Article ID 5.
2024 (English). In: Practical Assessment, Research, and Evaluation, E-ISSN 1531-7714, Vol. 29, no. 5, article id 5. Article in journal (Refereed). Published.
Abstract [en]

The overall aim was to examine the effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE), using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different anchor test forms administered at three test administrations was used. The simulation study examined differences in the ability of the test groups, and differences in the anchor test form with respect to item difficulty and discrimination. In the empirical study, the equated values from the three methods differed only slightly. The simulation study indicated that an easier anchor test form and/or an easier regular test form, and anchor items with a wider spread in difficulty, negatively affected the SEE and bias. The ability level of the groups was also important: equating with only less or more capable groups resulted in high SEEs at higher and lower test scores, respectively. The discussion includes practical recommendations on whom an anchor test should be given to, if there is a choice, and on how to select an anchor test form whose primary purpose is equating.

Place, publisher, year, edition, pages
University of Massachusetts Press, 2024
Keywords
NEAT, chained kernel equating, poststratification kernel equating, circle-arc equating, admission test, high-stakes assessment
National Category
Probability Theory and Statistics; Educational Sciences
Identifiers
urn:nbn:se:umu:diva-221929 (URN); 10.7275/pare.2020 (DOI); 2-s2.0-85208327525 (Scopus ID)
Funder
Wallenberg Foundations, 2019.0129
Available from: 2024-03-11. Created: 2024-03-11. Last updated: 2025-02-24. Bibliographically approved.
Wiberg, M., Laukaityte, I. & Rolfsman, E. (2024). The association between attitudes towards mathematics, students' background and TIMSS mathematics achievement. European Journal of Mathematics and Science Education, 5(1), 13-26
2024 (English). In: European Journal of Mathematics and Science Education, ISSN 2694-2003, Vol. 5, no. 1, p. 13-26. Article in journal (Refereed). Published.
Abstract [en]

The overall aim of this study is to examine the association between Swedish students’ attitudes towards mathematics, mathematics achievement as measured by the Trends in Mathematics and Science Study (TIMSS), socioeconomic status (SES), and educational background variables. A further aim is to investigate whether students’ attitudes towards mathematics have a mediating role between their mathematics achievement and their background. Several indicators of students’ SES and background, taken from both the TIMSS 2015 database and from Swedish official registers, were used. The overall results show that there were differences in attitudes towards mathematics in relation to the different SES and educational background measures. There are also associations between students’ SES and both TIMSS mathematics achievement and their attitudes towards mathematics. The students’ attitudes towards mathematics only had a small mediation role between the students’ backgrounds and TIMSS mathematics achievement. Finally, although the mediation models had a better fit when including other information, the mediation effect was lower. Practical implications of the obtained results are discussed.
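The mediating-role question in the abstract (background → attitudes → achievement) can be sketched with the product-of-coefficients approach to mediation. The sketch below uses simulated data with invented effect sizes, not TIMSS 2015 records, and plain OLS in place of the article's SEM models.

```python
import random

def slope(x, y):
    # OLS slope of y on x (simple regression).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_predictor_slopes(x1, x2, y):
    # OLS slopes of y on x1 and x2, solved via the 2x2 normal equations.
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c1 = [v - m1 for v in x1]
    c2 = [v - m2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(u * v for u, v in zip(c1, c2))
    s1y = sum(u * v for u, v in zip(c1, cy))
    s2y = sum(u * v for u, v in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
ses = [random.gauss(0, 1) for _ in range(500)]
attitude = [0.4 * s + random.gauss(0, 1) for s in ses]          # path a = 0.4
achievement = [0.5 * m + 0.3 * s + random.gauss(0, 1)
               for s, m in zip(ses, attitude)]                  # b = 0.5, c' = 0.3

a = slope(ses, attitude)                                   # SES -> attitudes
b, c_prime = two_predictor_slopes(attitude, ses, achievement)
indirect = a * b   # mediated (indirect) effect of SES via attitudes
```

With these invented coefficients the indirect effect (about 0.4 × 0.5) is small relative to the direct path, mirroring the abstract's finding that attitudes play only a small mediating role.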

Place, publisher, year, edition, pages
Eurasian Society of Educational Research, 2024
Keywords
Mediation analysis, national test results, school grades, SEM, SES
National Category
Probability Theory and Statistics; Educational Sciences
Identifiers
urn:nbn:se:umu:diva-222929 (URN); 10.12973/ejmse.5.1.13 (DOI)
Funder
Swedish Research Council, 2015-02160
Available from: 2024-04-03. Created: 2024-04-03. Last updated: 2024-04-04. Bibliographically approved.
Laukaityte, I., Rolfsman, E. & Wiberg, M. (2024). TIMSS vs. PISA: what can they tell us about student success?—a comparison of Swedish and Norwegian TIMSS and PISA 2015 results with a focus on school factors. Frontiers in Education, 9, Article ID 1323687.
2024 (English). In: Frontiers in Education, E-ISSN 2504-284X, Vol. 9, article id 1323687. Article in journal (Refereed). Published.
Abstract [en]

This paper explores the measurement capabilities of the Trends in International Mathematics and Science Study (TIMSS) and the Program for International Student Assessment (PISA) in assessing school factors that influence student performance. We specifically focus on the 2015 assessments of the science performance of eighth graders in Sweden and Norway. This was the latest year when the two assessments were conducted in the same year and science was the major subject area in the PISA assessment, which was essential for maximizing the assessments’ comparability. Using multilevel models, the study identifies common and unique factors across the assessments and investigates the factors that influence student performance at different proficiency levels. The findings highlight the importance of school-level factors, which are significant in both assessments. Moreover, both assessments provide information on overlapping sets of factors that have varying influence on the performance of students with different proficiency levels. Overall, there are limited common factors between TIMSS and PISA. School factors vary between low-performing and high-performing schools, with differing significance in Norway and Sweden. The results indicate that TIMSS and PISA assessments offer complementary information, particularly for low-performing schools. Our findings suggest that different school types may benefit or suffer from distinct school factors. The findings are relevant for both educational professionals and policy-makers.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2024
Keywords
multilevel models, student success, school-factors, TIMSS, PISA
National Category
Educational Sciences
Identifiers
urn:nbn:se:umu:diva-221346 (URN); 10.3389/feduc.2024.1323687 (DOI); 001174910400001 (); 2-s2.0-85186547719 (Scopus ID)
Funder
Swedish Research Council, 2015-02160
Available from: 2024-02-21. Created: 2024-02-21. Last updated: 2024-04-03. Bibliographically approved.
Wikström, C. & Laukaityte, I. (2021). Högskoleprovet våren och hösten 2019: Provtagargruppens sammansättning och resultat. Umeå: Umeå universitet
2021 (Swedish). Report (Other academic).
Abstract [sv] (translated)

Högskoleprovet is a selection instrument used in admissions to university and university-college studies in Sweden. The idea is that the test should serve as an alternative for applicants who are eligible for higher education but lack sufficiently competitive upper-secondary grades, thereby offering a second chance at higher education when there is competition for study places. The test is voluntary and open to all, and there is no limit on how many times it may be taken. A test result is valid for five years, and if a person has taken the test several times, the best result counts. The test's popularity varies and is affected by developments in upper-secondary schooling, the state of the labour market, and the demand for higher education. As a result, the number of test takers, and who takes the test, differ between administrations. The purpose of this report is to describe the composition and results of the test-taker groups in the spring and autumn of 2019. Results are presented for the test takers overall and broken down by gender, age and education. The report also describes the norming of the test results and its outcome, as well as the extent to which test takers choose to take the test several times and how they perform on repeated occasions.

Abstract [en]

The SweSAT, or Swedish Scholastic Assessment Test (in Swedish: Högskoleprovet), is a test used in admissions to higher education in Sweden. The test serves as an alternative to the high school GPA, giving students who do not have a competitive high school GPA, or who simply want to maximize their chances of competing for attractive study positions, a second chance. The test is optional, open to all, and can be taken an unlimited number of times. This means that the number and composition of test takers vary between administrations; the popularity of the test depends to some degree on external circumstances, such as the situation on the job market and school reforms, that affect the interest in higher education studies. The aim of this report is to describe the composition and test scores of the test-taker populations in the spring and autumn of 2019. Results are presented for test-takers of different gender, age and previous education. Additionally, the norming and equating procedure and the outcome of the standardization of the test are described. Test-taker performance and repeated test taking are also examined and discussed.

Place, publisher, year, edition, pages
Umeå: Umeå universitet, 2021. p. 36
Series
BVM / Institutionen för beteendevetenskapliga mätningar, Umeå universitet, ISSN 1652-7313 ; 69
Keywords
Högskoleprovet, SweSAT, test, admission, selection
National Category
Pedagogical Work Pedagogy
Research subject
didactics of educational measurement
Identifiers
urn:nbn:se:umu:diva-185414 (URN)
Available from: 2021-06-29. Created: 2021-06-29. Last updated: 2024-07-02. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-7282-5384