We report on the use of bilingual constructed response science assessments in the context of a research and development partnership with secondary school science teachers. Given the power that assessments have in today’s education systems, our project provided a series of workshops for teachers where they explored students’ emergent reform-oriented science meaning-making in our project-designed assessments.
This paper illustrates how the combination of teacher and computer guidance can strengthen collaborative revision and identifies opportunities for teacher guidance in a computer-supported collaborative learning environment. We took advantage of natural language processing tools embedded in an online, collaborative environment to automatically score student responses using human-designed knowledge integration rubrics. We used the automated explanation scores to assign adaptive guidance to the students and to provide real-time information to the teacher on students’ learning.
Test scoring procedures should align with the intended uses and interpretations of test results. In this paper, we examine three test scoring procedures for an operational assessment of early numeracy, the Early Grade Mathematics Assessment (EGMA). Current test specifications call for subscores to be reported for each of the eight subtests on the EGMA. This test scoring procedure has been criticized as being difficult for stakeholders to use and interpret, thereby impacting the overall usefulness of the EGMA for informing decisions.
Dig deeper into classroom artifacts using research-based learning progressions to enhance your analysis and response to student work, even when most students solve a problem correctly.
Ebby, C. B., Hulbert, E. T., & Fletcher, N. (2019). What can we learn from correct answers? Teaching Children Mathematics, 25(6), 346-353.
Ambitious efforts are taking place to implement a new vision for science education in the United States, in both Next Generation Science Standards (NGSS)-adopted states and those states creating their own, often related, standards. Inservice and pre-service teacher educators are involved in supporting teacher shifts in practice toward the new standards. With these efforts, it will be important to document shifts in science instruction toward the goals of NGSS and broader science education reform.
This paper describes HASbot, an automated text scoring and real‐time feedback system designed to support student revision of scientific arguments. Students submit open‐ended text responses to explain how their data support claims and how the limitations of their data affect the uncertainty of their explanations. HASbot automatically scores these text responses and returns the scores with feedback to students. Data were collected from 343 middle‐ and high‐school students taught by nine teachers across seven states in the United States.
Cirillo, M. & Hummer, J. (2019). Addressing misconceptions in secondary geometry proof. Mathematics Teacher, 112(6).
The purpose of this study was to develop and validate a survey of opportunities to participate (OtP) in science that will allow educators and researchers to closely approximate the types of learning opportunities students have in science classrooms. Additionally, we examined whether and how opportunity gaps in science learning may exist across schools with different socioeconomic levels. The OtP in science survey consists of four dimensions that include acquiring foundational knowledge, planning an investigation, conducting an investigation, and using evidence to communicate findings.
Validity-related issues are a growing topic within the mathematics education community. Until recently, validation evidence has been treated as something to gather when convenient, and it is rarely reported in ways that conform to current standards for assessment development. This theoretically-focused proceeding adds to a burgeoning theoretical argument that validation should be considered a methodology within mathematics education scholarship. We connect to design-science research, which is a well-established framework within mathematics education.
Bostic, J., Matney, G., Sondergeld, T., & Stone, G. (2018, November). Content validity evidence for new problem-solving measures (PSM3, PSM4, and PSM5). In T. Hodges, G. Roy, & A. Tyminski (Eds.), Proceedings for the 40th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education (p. 1641). Greenville, SC.