Compendium of Research Instruments for STEM Education, PART I: Teacher Practices, PCK, and Content Knowledge

Addendum added 2013

The original compendium included instruments that were identified through review of the projects funded in the first five cohorts of the National Science Foundation’s Discovery Research K-12 (DR K-12) program. An Addendum has been added to this version of the compendium to include ten additional instruments that were identified through review of the projects in the sixth DR K-12 cohort, which received initial funding in 2012. The two tables in the Addendum (beginning on p. 49) present the same kinds of information about these additional instruments as were presented for the original set of instruments in the main document. However, the information from the additional instruments in the Addendum is not incorporated into the body of this compendium, which remains substantively unchanged from the first release in August of 2012.

The purpose of this compendium is to provide an overview of the current status of STEM instrumentation commonly used in the U.S. and to provide resources for research and evaluation professionals. As Part 1 of a two-part series, its goal is to provide insight into the measurement tools available to generate efficacy and effectiveness evidence, as well as to understand processes relevant to teaching and learning. It focuses on instruments designed to assess teacher practices, pedagogical content knowledge, and content knowledge.

Just over half of the instruments identified in this Compendium have evidence of acceptable or good levels of reliability of implementation and scale internal consistency, and fewer than a third have associated validity evidence.

Weaknesses were identified in the following key areas:

  • Without basic information about what is needed to achieve an acceptable level of inter-rater reliability, users of these observation protocols, interview protocols, and scoring rubrics cannot make informed choices about implementing these tools in their own work.
  • Information about survey scale coherence, as well as content and construct validity, is essential to move the field toward a community consensus on operational definitions of key outcome variables.
  • Policymakers need our tools to provide predictive, concurrent, and discriminant validity evidence so that decisions about the efficacy and effectiveness of interventions rest on sound ground.
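The compendium does not prescribe a particular statistic, but for categorical observation or rubric codes, inter-rater reliability of the kind discussed above is commonly summarized with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch (the rater data and category labels below are hypothetical, for illustration only):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two raters to eight classroom segments.
rater_1 = ["high", "high", "low", "med", "low", "high", "med", "low"]
rater_2 = ["high", "med", "low", "med", "low", "high", "low", "low"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # prints 0.619
```

Reporting a statistic such as this, alongside the rater training needed to reach it, is the kind of information the bullet above calls for.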

This review indicates that, as a community, we need to provide relevant psychometric information on the tools we develop and use.