James Lester, Director of the Center for Educational Informatics and Distinguished Professor of Computer Science, North Carolina State University
Research on assessment has experienced a sea change since CADRE published the New Measurement Paradigms report in 2012. Prepared by a working group led by Mike Timms (ACER) and coordinated by Amy Busey (EDC), with Doug Clements (University of Denver), Janice Gobert (Rutgers), Diane Jass Ketelhut (University of Maryland), Debbie Reese (Wheeling Jesuit University), Eric Wiebe (North Carolina State University), and myself as collaborators, the report presented a snapshot of measurement methods featured in contemporary DRK-12 and REESE projects. It emphasized measurement methods from projects centering on intelligent learning environments, including methods that built on a strong psychometric foundation to investigate approaches such as embedded assessment and emerging techniques that drew on machine learning. Increasing adaptivity and expanding dimensions of student responses were pervasive themes running through the report, which painted a picture of a creative moment in time for assessment.
The years since the report was issued have been exciting indeed. Perhaps most striking is the astonishing acceleration of advances in AI and the promise it holds for the next generation of assessment. While the New Measurement Paradigms working group was prescient in recognizing machine learning as an enabling technology for assessment, the sheer magnitude of the inferential power that would soon be unleashed was not apparent. All of the cognitive qualities that designers of intelligent systems target—adaptivity, robustness, self-improvement—are now on the table for next-gen assessment.
AI technologies are now opening the door to assessing new kinds of constructs in new kinds of settings with new kinds of frameworks. As an example, my colleagues and I are now investigating how to leverage advances in multimodal learning analytics integrating machine learning and multichannel sensors to assess learners’ naturalistic engagement in museums (DRL-1713545). With a focus on assessing engagement in groups of museum visitors as they interact with science exhibits, we are using deep neural architectures to create predictive models of engagement from multimodal data channels, including facial expression, gesture, posture, and movement. Efforts to build multimodal assessment models designed to operate “in the wild” would have been very hard to imagine just a few years ago.
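The fusion-then-predict pattern behind such multimodal models can be sketched in miniature. In this toy example the channel names and feature dimensions are purely illustrative, and a single sigmoid unit with random weights stands in for the deep neural architectures and learned parameters the project actually uses; it shows only the basic idea of concatenating per-channel features and mapping them to an engagement estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-channel feature vectors for one visitor at one time step
# (channel names and dimensions are illustrative, not drawn from the project).
face = rng.random(16)      # facial-expression features
gesture = rng.random(8)    # gesture features
posture = rng.random(8)    # posture features
movement = rng.random(4)   # movement features

# Early fusion: concatenate all channels into one feature vector.
x = np.concatenate([face, gesture, posture, movement])

# Toy predictor: a single linear layer plus sigmoid. In practice the
# weights would be learned from labeled engagement data by a deep model.
w = rng.normal(size=x.shape[0])
b = 0.0

def predict_engagement(features, weights, bias):
    """Sigmoid of a linear score: a probability-like engagement estimate."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

score = predict_engagement(x, w, b)
```

The sigmoid keeps the output in (0, 1), so `score` can be read as an engagement probability; a real system would replace the linear layer with trained per-channel encoders whose outputs are fused before prediction.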
It seems certain that advances in AI will deeply inform assessment. Explorations of “AI-enhanced assessment” will quickly transition to “AI-driven assessment.” These forays—already underway—begin as research efforts but will no doubt quickly make their way into the world to support the measurement of a growing constellation of constructs for a broad array of learner populations. Because assessment development is a data-intensive activity, new approaches will be propelled by a virtuous cycle in which large data collections yield increasingly high-fidelity assessment models. At the same time, it will be imperative for AI-centric assessment frameworks to account for fairness, accountability, and transparency. Nevertheless, with the enormous potential these developments hold for helping learners and improving learning, it seems we may well be entering a golden age for assessment research.