Hypothetical learning trajectories imply negotiation between teachers and students. In this session, researchers discuss how they validate learning trajectories under variable conditions and anticipate change in practice artifacts.
This session contributes to a methodological strand involving experienced researchers on learning trajectories and/or learning progressions (LTs/LPs). All of these researchers have empirically demonstrated the potential of LT/LP work to broaden participation by leveraging student ideas and offering engaging task designs. Presenters share new LT research on computational thinking. This topical session asks participants to respond to critical questions on validity and change, followed by audience questions and comments. Validation of LTs (conducted through qualitative analysis and/or measurement) is a process of argumentation that links one's cognitive claims to forms of evidence gathering and to the interpretation of results through warrant and evidence (Kane, 2006; Pellegrino, DiBello, & Goldman, 2016).
The presenters recognize that validation is strongly tied to one's sample and to one's purposes, models of use, and implementation. During this session, a presenter from each project reports on how the project plans to conduct, or did conduct, validation studies of its LT and how differing conditions of practice are handled in field testing and implementation. Participants are then asked about their concerns regarding their LTs/LPs: What is hypothetical about LTs? To what degree, and how, do you expect them to change or be adapted? Do you have a process for anticipating and accommodating change in the LT/LP and its artifacts of practice? Finally, participants are asked: How are your approaches to validation and change mutually complementary or in tension?
Kane, M. T. (2006). Content-related validity evidence in test development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 131-153). Mahwah, NJ: Lawrence Erlbaum Associates.
Pellegrino, J. W., DiBello, L. V., & Goldman, S. R. (2016). A framework for conceptualizing and evaluating the validity of instructionally relevant assessments. Educational Psychologist, 51(1), 59-81.