Presenters discuss creating and evaluating a substantial revision of an existing assessment of early mathematics using emerging multidimensional theoretical models, both cognitive and psychometric.
Advancing understanding of how early mathematics learning progresses depends on the development of good measures. The presenters of this session produced the “Research-based Early Mathematics Assessment” (REMA) and now better understand its limitations. They are creating and evaluating a substantial revision of the REMA using emerging multidimensional theoretical models, both cognitive and psychometric. The development of this instrument will lead to substantive advances in mathematics education research, cognitive psychology, and psychometric theory.
There are two major objectives. First, the presenters are producing a cognitively diagnostic adaptive assessment that will yield more useful and detailed information about students’ knowledge of mathematics than was previously possible. This instrument will produce data on (a) students’ levels of thinking along multiple empirically validated learning trajectories, with detailed individual cognitive profiles, and (b) individual cognitive data that can be aggregated at the classroom, school, district, and (even) state levels, thereby providing specific information about teaching and curricula. The adaptive assessment will be a boon to researchers, teachers, and policymakers who seek large-sample, quantitative indicators of students’ learning.
Second, this advance allows the presenters to subject their learning trajectories to close cognitive diagnosis using cutting-edge psychometric approaches. The NSF has funded a number of researchers to create such trajectories, but few have proposed rigorous models for validating these progressions. The presenters and others have tested them via the Rasch model. However, the Rasch model forces an artificial and restrictive unidimensional structure onto the subject content, which is unsatisfactory both theoretically and practically.
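For context on the unidimensionality critique: the Rasch model summarizes each student with a single ability parameter and each item with a single difficulty parameter, so every distinction among students must be expressed on one scale. A minimal sketch (parameter values are illustrative, not drawn from the REMA data):

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that a student with ability theta answers an item
    of difficulty b correctly, under the Rasch (1PL) model:
    P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability exactly matches item difficulty, success probability is 0.5;
# all variation among students is captured by this one number theta.
print(round(rasch_probability(theta=0.0, b=0.0), 2))   # 0.5
print(rasch_probability(theta=2.0, b=0.0) > 0.5)       # True
```

Because theta is a single scalar, two students with the same theta are treated as identical even if they reason in qualitatively different ways, which is the restriction the multidimensional cognitive-diagnostic approach is meant to lift.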
Presenters will share and discuss their first two years’ work: developing the Q-matrices and analyzing the legacy data from the REMA using Q-matrix theory, the Rule Space Method, and poset-based adaptive testing methodologies. They seek critical commentary from participants at this still-early stage of development. At the general level, this will include their framework, theory, and analyses. A more detailed interaction will result from sharing paper copies of one set of items, attributes, and the Q-matrix that connects them with all participants, allowing them time to react to and critique the specific assignments of attributes to items in that Q-matrix.
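To make the item-attribute assignment concrete for readers unfamiliar with Q-matrix theory: a Q-matrix is a binary table whose rows are items and whose columns are cognitive attributes, with a 1 where the item requires the attribute. The toy sketch below uses invented attributes and items (not the actual REMA attributes) and a conjunctive, DINA-style scoring rule, one common choice among several in cognitive diagnosis:

```python
# Hypothetical attributes and items for illustration only.
attributes = ["verbal counting", "one-to-one correspondence", "cardinality"]
q_matrix = [
    [1, 0, 0],  # item 1 requires only verbal counting
    [1, 1, 0],  # item 2 also requires one-to-one correspondence
    [1, 1, 1],  # item 3 requires all three attributes
]

def ideal_response(mastery, q_matrix):
    """Under a conjunctive (DINA-style) rule, a student answers an item
    correctly only if they have mastered every attribute the item requires.
    `mastery` is a 0/1 vector over attributes."""
    return [
        int(all(mastery[k] for k in range(len(row)) if row[k]))
        for row in q_matrix
    ]

# A student who has mastered the first two attributes but not cardinality
# is expected to answer items 1 and 2 correctly but miss item 3.
print(ideal_response([1, 1, 0], q_matrix))  # [1, 1, 0]
```

Comparing such ideal response patterns against observed responses is the basic move behind Rule Space and related diagnostic methods, which is why the specific 0/1 assignments in the Q-matrix (the subject of the planned participant critique) matter so much.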