Developing Diagnostic Assessments of STEM Learning: Key Decisions and Alternative Approaches

This session will engage participants in a discussion of the design issues and expertise required to develop diagnostic assessments of science and mathematics learning.

Panel

This session will explore alternative approaches to designing diagnostic assessments of STEM learning. The discussion will concentrate on the full range of methodological decisions involved in drawing on current knowledge of student thinking about key ideas and transforming that knowledge into assessment strategies grounded in underlying models of how knowledge develops over time.

Designing diagnostic assessments is a complex undertaking that requires bringing together teams of researchers who combine knowledge of student thinking, assessment, measurement, and classroom practice, and who are committed to providing better support for teachers as they make instructional decisions.

Nearly a decade ago, the National Research Council consensus report, Knowing What Students Know (Pellegrino, Chudowsky, & Glaser, 2001), called for a greater integration of knowledge about how students learn in subject-matter domains into assessment design and practice. The report called on assessment developers to consider explicitly how cognitive theories of the development of skill and knowledge in particular domains could inform task design as well as interpretive frameworks for analyzing observations of student performance in assessment tasks. At the same time, the report suggested that cognitive science could benefit from the application of emerging statistical methods for interpreting assessment results, which could be a means to test conjectures about how learning develops.

A number of projects in recent years, including many now funded under the DR K–12 program, have taken up these challenges. This session will engage participants in discussion of projects focused on developing diagnostic assessments of science and mathematics learning. Diagnostic assessments aim to provide evidence of student learning in a form that teachers can use to adjust their instruction and improve learning outcomes in a particular domain. Developing diagnostic assessment systems requires expertise in cognitive science, measurement, curriculum and instruction, and teacher learning. Coordinating this expertise to organize the development process and make design decisions presents teams with numerous complexities and challenges.

The session features two investigators who are each undertaking projects to build systems that support diagnostic guidance for teachers and who, at the same time, serve as advisors on each other's projects. They will begin by describing their projects, stressing commonalities and differences, and then quickly turn to articulating the kinds of design issues that arise in their work. The aim is to elicit and synthesize developing expertise in the design of diagnostic assessment systems, generating insights that cut across the domains of science and mathematics as well as insights that are domain specific. By organizing the session as a combination of roundtable and panel discussion, the presenters hope to contribute to an improvement community of projects that can bootstrap one another's work and increase the speed of innovation.

The session will be organized into three parts. In the first part, the two presenters will offer a brief typology of projects focused on diagnostic assessment, review the Assessment Triangle framework presented in Knowing What Students Know, and share insights from their two projects. In the second part, they will break the audience into small roundtables; science and mathematics education researchers will be encouraged to form groups with other experts in their domain. At the roundtables, participants will discuss design issues and insights that have arisen, or could arise, for them in (1) specifying models of cognition, (2) designing tasks, (3) selecting and adapting statistical models for interpreting data, and (4) validating their approaches. Participants will also be asked to share strategies for organizing projects so as to use the expertise available to the team as efficiently as possible. In the third and final part, the roundtable groups will share key discussion points with the full session.