Abstract
Objectives: Using the lens of four indicators that collectively contribute to the strength and coherence of mathematical arguments (Sampson, Grooms, & Walker, 2011), we examined and compared the strength of the mathematical arguments that pre-service teachers (PSTs) generate with the strength of their interpretative arguments, in which they make sense of, evaluate, and critique student-generated arguments. We also examined the evidence on which PSTs draw as they make pedagogical decisions about using student-generated arguments to orchestrate mathematical argumentation during class discussion. The following research questions guided this work:
RQ 1: How do the mathematical arguments that K-8 pre-service teachers generate compare to their interpretative arguments, in which they analyze the mathematical arguments of students?
RQ 2: When asked to select student-generated arguments to facilitate mathematical argumentation in class discussion, what reasons do PSTs give to support their decisions?
Method: This semester-long study was conducted with 37 PSTs enrolled in a mathematics course designed for grades 1-8 PSTs. The data consisted of PSTs’ written responses to a series of problems that facilitated thinking about fractions and proportions, and of their interpretative analyses of student-generated arguments. Two of the argument-analysis tasks included four samples of student-generated arguments and, in addition to asking the PSTs to analyze the student arguments, asked them to respond to the following prompt: Assume that the four students whose work you just analyzed are in your class. You have enough class time to invite two students to present and discuss their solutions with the entire class. Which two solutions would you select for class discussion, and why? Be clear in explaining your choices.
To answer RQ 1, guided by Sampson et al.’s (2011) framework, we developed task-specific rubrics to score PSTs’ responses. We then used a repeated-measures ANOVA to compare changes in PSTs’ performance on the argument-construction and argument-analysis tasks over time. To answer RQ 2, we conducted a qualitative analysis of PSTs’ responses to the argument-selection prompt and examined changes in the proportion of PSTs who cited each identified reason at the beginning and the end of the semester.
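For readers who wish to reproduce this kind of comparison, the sketch below shows one way a repeated-measures ANOVA of this design could be run in Python with statsmodels. It is a minimal illustration, not the analysis script used in this study; the file name and column names (pst_id, task_type, time, score) are hypothetical.

```python
# Minimal sketch of a repeated-measures ANOVA on rubric scores,
# assuming a hypothetical long-format data file with one score per
# PST per task type (construction vs. evaluation) per time point.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical file and column names, for illustration only.
scores = pd.read_csv("pst_scores_long.csv")  # columns: pst_id, task_type, time, score

# Within-subject factors: task type and time of measurement.
model = AnovaRM(
    data=scores,
    depvar="score",
    subject="pst_id",
    within=["task_type", "time"],
)
result = model.fit()
print(result)  # F statistics and p-values for each within-subject effect
```

Note that AnovaRM assumes a balanced design, with exactly one observation per participant for each combination of the within-subject factors.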
Results and Discussion: Our data revealed that PSTs’ competency in analyzing and critiquing mathematical arguments does not go hand in hand with their competency in constructing mathematical arguments. While PSTs’ performance on both the argument-construction and the argument-evaluation tasks improved systematically over the semester, the overall group mean for the argument-construction tasks (M = 0.739, SE = 0.202) was significantly higher than that for the argument-evaluation tasks (M = 0.572, SE = 0.023), F(1, 36) = 55.496, p < .001.
Mathematics teacher educators therefore need to place special emphasis on activities in which PSTs make sense of, and construct arguments about, the mathematical reasoning of their students.
A qualitative analysis revealed that after the intervention the PSTs were more likely to build strategically on student-generated arguments. Their motivations for using student-generated arguments as springboards for class discussion are summarized in Table 3.
We will discuss the implications of our work for pre-service teacher education.