New Measurement Paradigms

In June 2010, the New Measurement Paradigms (NMP) work group emerged as an active subset of a larger DR K-12 assessment community. In this spotlight, CADRE invites you to take a closer look at the report that came out of their work together and at the people and projects involved. This is one of a series of spotlights featuring projects working in a particular area of STEM education research. If you have suggestions for future topics or are interested in having your project featured, email CADRE.

The April 2012 New Measurement Paradigms report is designed to serve as a reference for researchers working on projects that create e-learning environments in which judgments must be made about students' levels of knowledge and skill, as well as for those who are interested in these methods but have not yet delved into them.

As the New Measurement Paradigms name implies, NMP work group members feel it is time for a new conception of what constitutes the field of educational measurement. Their work together culminated in a collection of papers representing a snapshot of the variety of measurement methods in use (at the time of writing) across several projects funded through the National Science Foundation's REESE and DR K–12 programs. All of the projects are developing and testing intelligent learning environments that seek to carefully measure and promote student learning. The purpose of this collection of papers is to describe and illustrate the use of several measurement methods employed to achieve this: Knowledge Specification, Item Response Theory, Machine-Learning Methods, and Educational Data Mining.

*Section authors are listed with lead author in italics

Knowledge Specification: Knowledge specification uses a priori analysis to align learning environments and embedded assessment with targeted knowledge.

  • Debbie Denise Reese (Wheeling Jesuit University)
  • Janice Gobert (Worcester Polytechnic Institute)
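The core idea can be sketched in a few lines. The following is a purely illustrative example, not taken from the report: an a priori mapping from embedded assessment items to the knowledge components they target, with a check that every targeted component is covered by at least one item. All item and component names are invented.

```python
# Hypothetical sketch of knowledge specification: align embedded
# assessment items with targeted knowledge components (KCs) up front,
# then verify coverage. Names are invented for illustration.

targeted_kcs = {"ratio", "proportion", "unit-rate"}

item_to_kcs = {
    "item_01": {"ratio"},
    "item_02": {"ratio", "proportion"},
    "item_03": {"unit-rate"},
}

# Union of all KCs touched by at least one item.
covered = set().union(*item_to_kcs.values())
uncovered = targeted_kcs - covered
print(sorted(uncovered))  # [] -> every targeted KC has at least one item
```

A coverage gap found this way would signal, before any data are collected, that the learning environment cannot measure one of its own targets.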

Item Response Theory (IRT): IRT models place assessment items and persons on a common scale, so that distances between items, between persons, and between a person and an item can be estimated and compared.

  • Douglas H. Clements (University at Buffalo)
  • Julie Sarama (University at Buffalo)
  • Michael Timms (Australian Council for Educational Research)
  • Curtis Tatsuoka (Case Western Reserve University)
  • Kikumi Tatsuoka (Columbia University, retired)
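To make the common-scale idea concrete, here is a minimal sketch of the Rasch (one-parameter logistic) model, the simplest IRT model. The projects in the report may use richer models; this example only shows how person ability and item difficulty, expressed in the same logit units, combine through their distance.

```python
import math

def rasch_probability(theta, b):
    """Probability that a person with ability theta answers an item of
    difficulty b correctly, under the Rasch (1PL) IRT model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Because persons and items share one scale, only the distance
# theta - b matters: when ability equals difficulty, the chance of a
# correct response is exactly 0.5.
print(rasch_probability(1.0, 1.0))  # 0.5
print(rasch_probability(2.0, 1.0))  # ~0.73, person one logit above the item
```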

Machine Learning: Machine learning approaches take advantage of the ways in which computers can learn to recognize complex patterns and make intelligent decisions based on data.

  • Michael Timms (Australian Council for Educational Research)
  • James Lester (North Carolina State University)
  • Kristy Elizabeth Boyer (North Carolina State University)
  • Eric Wiebe (North Carolina State University)
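As a toy illustration of learning a decision rule from data, the sketch below trains a nearest-centroid classifier to label a student action sequence as "productive" or "unproductive" from two invented features (time on task and hint requests). This is not drawn from any of the projects, whose models are far richer; it only shows the pattern-from-data principle.

```python
# Illustrative nearest-centroid classifier; all data are invented.
from statistics import mean

train = {
    "productive":   [(30.0, 0.0), (45.0, 1.0), (40.0, 0.0)],
    "unproductive": [(5.0, 4.0), (8.0, 5.0), (6.0, 6.0)],
}

# "Training": compute the mean feature vector (centroid) per class.
centroids = {
    label: tuple(mean(f[i] for f in feats) for i in range(2))
    for label, feats in train.items()
}

def classify(features):
    """Predict the class whose centroid is nearest in squared distance."""
    return min(
        centroids,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(features, centroids[c])),
    )

print(classify((35.0, 1.0)))  # productive
print(classify((7.0, 5.0)))   # unproductive
```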

Educational Data Mining: Educational Data Mining tools and techniques focus primarily on modeling and uncovering patterns in large data sets.

  • Diane Jass Ketelhut (University of Maryland-College Park)
  • Alexander Yates (Temple University)
  • Avirup Sil (Temple University)
  • Michael Timms (Australian Council for Educational Research)
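One basic data-mining primitive behind this kind of work is frequency counting over event logs. The sketch below, on an invented clickstream, counts adjacent action pairs; a high count for repeated hint requests, for example, can surface a behavior pattern worth investigating.

```python
# Hypothetical sketch: mine frequent adjacent-action pairs from a toy
# log of student actions. The log and action names are invented.
from collections import Counter

event_log = ["open", "hint", "hint", "answer",
             "open", "hint", "hint", "answer"]

# Count each pair of consecutive actions (bigrams).
bigrams = Counter(zip(event_log, event_log[1:]))
print(bigrams[("hint", "hint")])  # 2 -> repeated hint requests occur twice
```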

The final section of the report looks ahead to where research on educational measurement in electronic learning environments is headed and identifies areas that still need further research and development. As a first step in disseminating their work, the group held a symposium at the 2012 AERA Annual Meeting, where they provided a brief overview of the methods highlighted in the report and presented posters on the authors' NSF-supported projects that implement those methods.