Classroom Observation Protocol

Short Description

The Classroom Argumentation Observation Protocol, developed in LAMP, measures teachers’ pedagogical practices in terms of the opportunities teachers provide for students to engage in the mathematics learning experiences specified in the logic model. The protocol yields quantified scores for the types of claims a teacher uses, the explicitness of those claims, the sophistication of the warranting, and the use of warrants and data. Open‑ended questions ask about the extent to which the observed lessons address LLAMA lesson objectives. LAMP established the content validity of the protocol through an expert panel.

At the onset of the LLAMA project, the protocol was revised. The primary revisions were (a) incorporating our recent understanding of generic example arguments (see Yopp & Ely, 2016; Yopp, Ely, & Johnson‑Leung, 2016) and (b) asking about the percentage of students engaged in each classroom argumentation episode. The protocol was further refined during weekly Principal Investigator meetings and was piloted in Year 1. During Year 2, the research teams participated in observation training to ensure interrater reliability among observers and to maintain a codebook of decision rules for coding the observations. Minor modifications were made to the protocol during this training period: over multiple sessions, the team watched and scored videos, then revised the rubric and its wording accordingly.

 

Argument and Reasoning Assessment

Year
2021
Short Description

The research team developed the Student Argument and Reasoning Assessment (SARA) to measure students’ abilities to construct viable arguments and to critique others’ arguments. The SARA was originally developed and validated in the LAMP pilot study (NSF Award Number: 1317034). Items were developed by reviewing prior research on proof and proving (e.g., Healy & Hoyles, 2000; Knuth, 2002b), state assessments, and feedback from the external advisory board.

The pretest has 5 items: 4 measure the ability to construct viable arguments, and 1 assesses the ability to critique others’ arguments. Specifically, Item 1 was designed to elicit a direct argument; Item 2, an indirect or a direct argument; Item 3, a counterexample argument; and Item 4, an exhaustive argument. Item 5 was designed to assess students’ ability to see the generalization in a specific example and to recognize that the structure in the example applies to all cases. These items address mathematical content at the Grade 7 level so that Grade 8 students have the mathematical knowledge needed to complete the assessment as a pretest at the beginning of their Grade 8 year (i.e., this ensures the assessment measures argumentation skills rather than mathematical content knowledge). The posttest includes the same 5 items as the pretest plus 4 additional items that address mathematical content taught to Grade 8 students during the school year; at the onset of the school year, students would not have the content knowledge to respond to these items on a pretest.

Argumentation Infographic

Research suggests that if students use viable argumentation in their middle school classes, then they will increase their complex mathematical reasoning and mathematics achievement. This is a 2-page infographic detailing the results from a case study.

Author/Presenter

RMC

Year
2019