The Effect of Automated Feedback on Revision Behavior and Learning Gains in Formative Assessment of Scientific Argument Writing

The application of new automated scoring technologies, such as natural language processing and machine learning, makes it possible to provide automated feedback on students' short written responses. Although many studies have investigated automated feedback in computer-mediated learning environments, most have focused on multiple-choice items rather than constructed-response items. This study focuses on the latter and investigates a formative feedback system integrated into an online science curriculum module on climate change. The feedback system incorporates automated scoring technologies to support students' revision of scientific arguments. By analyzing log files from the climate module, we explore how student revisions enabled by the formative feedback system correlate with student performance and learning gains. We also compare the impact of generic (context-independent) feedback with that of contextualized (context-dependent) feedback. Our results show that (1) students with higher initial scores were, on average, more likely to revise after receiving automated feedback, (2) revisions were positively related to score increases, and (3) contextualized feedback was more effective in supporting learning. The findings of this study provide insights into the use of automated feedback to improve scientific argument writing as part of classroom instruction.

Zhu, M., Liu, O. L., & Lee, H. (2019). The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing. Computers & Education, 143.