In formative assessment, constructed-response questions are typically used for scientific argumentation, but students seldom receive timely feedback while answering them. Advances in natural language processing (NLP) make it possible for researchers to use an automated scoring engine to provide real-time feedback to students. As with any new technology, it is still unclear how automated scoring and feedback may affect learning in scientific argumentation. In this study, we analyze log data to examine the granularity of students’ interactions with automated scores and feedback and investigate the association between various student behaviors and science performance. We first recover and visualize the patterns of students navigating through the argument items. Preliminary analyses show that most students did make use of the automated feedback, and that checking feedback and making revisions improved students’ final scores in most cases. We also cluster the activity sequences extracted from the time-stamped event log to explore patterns in students’ behavior.
Zhu, M., Liu, O. L., Mao, L., & Pallant, A. (2016). Use of automated scoring and feedback in online interactive Earth science tasks. In Proceedings of the 2016 IEEE Integrated STEM Education Conference (ISEC).