Developing and Evaluating Assessments of Problem-Solving in Computer Adaptive Testing Environments (Collaborative Research: Sondergeld)

The problem-solving measures aligned to the Common Core State Standards for Mathematics (CCSSM) assess students' problem-solving performance within the context of CCSSM math content and practices. This project expands the scope of the problem-solving measures' use and score interpretation by advancing mathematical problem-solving assessment into computer adaptive testing, which allows for more precise and efficient targeting of student ability than static tests.

Full Description: 

Problem solving has been a priority within K-12 mathematics education for over four decades and is reflected throughout the Common Core State Standards for Mathematics (CCSSM) initiative, which has been adopted in some form by 41 states. Broadly defined, problem solving involves the mathematical practices in which students engage as they solve intellectually challenging mathematical tasks. In prior research, problem-solving measures aligned to CCSSM for grades 3-5 were developed and validated to supplement previously established problem-solving measures for grades 6-8. These measures assess students' problem-solving performance within the context of CCSSM math content and practices.

This project expands the scope of the problem-solving measures' use and score interpretation by advancing mathematical problem-solving assessment into computer adaptive testing. Computer adaptive testing allows for more precise and efficient targeting of student ability than static tests, yet few measures designed to assess students' mathematical problem-solving ability use this technology. Because adaptive tests are shorter, they require less in-class time than the current paper-pencil problem-solving measures, increase classroom instruction time, and may limit test-taker fatigue, while retaining sufficient reliability and strong validity evidence. Finally, the project will benchmark the current grades 6-8 instruments using an objective standard-setting method, which allows for improved score interpretations with content-related feedback. Immediate student- and class-level reports produced through the computer adaptive testing system will allow teachers to modify instruction to improve students' learning.
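The core idea behind the adaptive targeting described above can be sketched in a few lines. The fragment below is a minimal illustration, not the project's actual algorithm: under a Rasch (one-parameter logistic) model, the most informative next item is the unused one whose difficulty lies closest to the current ability estimate, and the estimate is nudged after each response. The item bank, difficulties, responses, and step size are all hypothetical.

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) probability of a correct response:
    P(correct) = 1 / (1 + exp(-(theta - b))), in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, bank, used):
    """Pick the unused item whose difficulty is closest to the current
    ability estimate -- the maximum-information choice under Rasch."""
    return min((i for i in bank if i not in used),
               key=lambda i: abs(bank[i] - theta))

def update_theta(theta, b, correct, step=0.5):
    """One gradient step on the response log-likelihood: ability rises
    after an unexpectedly correct answer, falls after an incorrect one."""
    return theta + step * ((1.0 if correct else 0.0) - rasch_p(theta, b))

# Simulated session with a hypothetical four-item bank (difficulties in logits).
bank = {"A": -1.0, "B": 0.0, "C": 1.5, "D": 2.0}
theta, used = 0.0, set()
for answer in (True, True, False):  # illustrative responses
    item = next_item(theta, bank, used)
    used.add(item)
    theta = update_theta(theta, bank[item], answer)
```

An operational system would use a proper maximum-likelihood or Bayesian ability estimator and content-balancing constraints, but the select-respond-update loop is the mechanism that lets an adaptive test reach a precise estimate with fewer items than a fixed form.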

This five-year project aims to advance the use of computer adaptive testing and assessment development for use in mathematics instruction. The project applies an iterative, stakeholder-informed design-science methodology and employs Rasch modeling for the psychometric analysis during item development and validation. The project aims to: (a) benchmark the previously established grades 6-8 problem-solving measures; (b) develop, calibrate, and validate criterion-referenced computer adaptive testing for each measure; (c) construct student- and class-level score reports for integration into the computer adaptive testing system; and (d) investigate teachers' capacity for implementing, interpreting, and using the assessments and results in STEM learning settings. The project addresses the following research questions:
(RQ1) What benchmark performance standards define different proficiency levels on the problem-solving measures for each grade level?
(RQ2) What are the psychometric properties of new problem-solving measure items developed for the computer adaptive testing item bank?
(RQ3) Is there significant item drift across student populations on the new problem-solving measure items?
(RQ4) To what extent are problem-solving measure item calibrations stable within the computer adaptive testing system?
(RQ5) What recommendations for improvement, if any, do teachers and students have for the new problem-solving measure items, the computer adaptive testing platform, and the reporting system?
(RQ6) To what extent do teachers interact with, perceive, and make sense of the assessment information generated for use in practice?
(RQ7) Does an online learning module build teacher capacity for problem-solving measures, computer adaptive testing implementation, interpretation, and use of student assessment outcomes in STEM learning settings?
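RQ3 and RQ4 concern the stability of Rasch item calibrations across groups and over time. A common screening approach, sketched below under assumed inputs, is to calibrate the same items in two samples and flag any item whose difficulty estimates differ by more than a fixed number of logits; 0.5 logits is a frequently cited rule of thumb, though the project's actual drift criterion is not specified in this description. The item names and values are illustrative.

```python
def flag_item_drift(cal_a, cal_b, threshold=0.5):
    """Compare two Rasch calibrations of the same item bank and return
    {item: difficulty difference in logits} for items whose estimates
    differ by more than `threshold` (0.5 logits is a common rule of
    thumb for flagging potential drift)."""
    shared = cal_a.keys() & cal_b.keys()  # only items calibrated in both samples
    return {item: cal_a[item] - cal_b[item]
            for item in shared
            if abs(cal_a[item] - cal_b[item]) > threshold}

# Hypothetical difficulty estimates (logits) from two student populations.
fall_calibration   = {"q1": 0.10, "q2": 1.00, "q3": -0.40}
spring_calibration = {"q1": 0.05, "q2": 0.20, "q3": -0.35}
drifting = flag_item_drift(fall_calibration, spring_calibration)
# Only q2 exceeds the 0.5-logit threshold here.
```

Flagged items would then receive closer review (e.g., a formal differential item functioning analysis) before being retained in the adaptive item bank.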
An experimental design will be used to investigate teachers' capacity for implementing, interpreting, and using the problem-solving measures in a computer adaptive testing system. The project has the potential to impact the field by providing school districts and researchers a means to assess students' mathematical problem-solving performance, at a single point in time or as growth over time, efficiently and effectively; to address future online learning needs; and to improve classroom teaching through more precise information about students' strengths with less class time devoted to assessment.
