Students’ use of superficial reasoning seems to be a main reason for learning difficulties in mathematics. It is therefore important to investigate the reasons for this use and the components that may affect students’ mathematical reasoning development. Assessments have been claimed to be a component that may significantly influence students’ learning.
The purpose of the study in Paper 1 was to investigate the kind of mathematical reasoning that is required to successfully solve the tasks in the written tests students encounter in their learning environment. This study showed that a majority of the tasks in teacher-made assessments could be solved successfully by using only imitative reasoning. The national tests, however, required creative mathematically founded reasoning to a much higher extent.
The question of what kind of reasoning the students actually use, regardless of what has theoretically been claimed to be required on these tests, still remains. This question is investigated in Paper 2.
Paper 2 also studies the relation between the theoretically established reasoning requirements, i.e. the kind of reasoning the students have to use in order to successfully solve the included tasks, and the reasoning actually used by the students. The results showed that the students to a large extent applied the same reasoning as was required, which means that the framework and the analysis procedure can be valuable tools when developing tests. It also strengthens many of the results throughout this thesis. A consequence of this concordance is that the national tests, with their high reasoning demands, also resulted in a higher use of such reasoning, i.e. creative mathematically founded reasoning. Paper 2 can thus be seen as having validated the framework and the analysis procedure used for establishing these requirements.
Paper 3 investigates the reasons why the teacher-made tests emphasise the low-quality reasoning found in Paper 1. In short, the study showed that the high proportion of tasks solvable by imitative reasoning in teacher-made tests seems explainable by a combination of the following factors: (i) limited awareness of differences in reasoning requirements, (ii) low expectations of students’ abilities, and (iii) the desire to get students to pass the tests, which was believed to be easier when creative reasoning was excluded from the tests.
Information about these reasons is crucial for the possibility of changing this emphasis. Results from this study can also be used heuristically to explain some of the results found in Paper 4 concerning those teachers who did not seem to be influenced by the national tests.
There are many suggestions in the literature that high-stakes tests affect classroom practice. The national tests may therefore influence teachers in their development of classroom tests. Findings from Paper 1 suggest that this proposed impact seems to have been limited, at least regarding the kind of reasoning required to solve the included tasks. What about the other competencies described in the policy documents?
Paper 4 investigates whether the Swedish national tests have had such an impact on teacher-made classroom assessment. The results showed that the impact, in terms of a similar distribution of tested competencies, is very limited. The study did, however, show that impact from the national tests on teachers’ test development exists, and how this impact may operate.