Do teachers learn from classroom video clips on an online professional development site beyond what they indicate in their posted commentary? By comparing quantitative web usage data and coded online video comments with qualitative interview data from 41 users of the Everyday Mathematics Virtual Learning Community (VLC), we seek to understand what, and to what extent, individual users learn from video online. This work contributes to the growing body of research on the efficacy and feasibility of online teacher professional development (OTPD) and offers actionable design ideas for increasing participation and reflection on such sites.
In traditional settings, video-based learning has become an integral component of professional development (Ball & Cohen, 1999; Seago, 2004) for both pre-service (Chval et al., 2009; Sun & van Es, 2015) and in-service teachers (Borko et al., 2008; Santagata, 2009; Sherin & van Es, 2009). However, less is known about the efficacy and feasibility of video-based OTPD compared to traditional models (Borko et al., 2009). Recently, researchers have called for more empirical work investigating the complexities of learning in OTPD contexts (Dede et al., 2009; Moon et al., 2014).
The VLC, with approximately 44,000 members, serves as the context for this study. Because teachers’ analysis of video has been positively associated with their use of effective practices (Sherin & van Es, 2009; Sun & van Es, 2015) and with student learning (Kersting et al., 2010, 2012), previous work on the VLC has used teachers’ comments to index learning on the site (Bates et al., 2016; Beilstein et al., 2017). These studies revealed that although little evidence of deep analysis exists online, as measured by user comments posted publicly in response to video (Bates et al., 2016), teachers can produce more deeply analytical commentary offline in response to video (Beilstein et al., 2017).
The current study seeks to understand why deep analysis may—or may not—make it to the online space. To find patterns among the data sets, as well as inconsistencies across them, the data from web analytics, online comments, and interviews were consolidated in a joint display (e.g., Lee & Greene, 2007).
Findings from the interviews reveal teacher learning beyond what the quantitative data capture. In some instances, the interview, web analytics, and coded commentary data converge: the interviews reinforce what the web data suggest, while also shedding light on users’ motivations for watching specific videos and on how the videos inform their practice. Other interviews, however, paint a different picture, revealing deep reflection that does not appear in the posted commentaries.
Although web analytics can provide OTPD developers with a wealth of data on user behaviors and preferences, they can also miss important aspects of teachers’ responses to video. Below the surface-level commentary found on the VLC, teachers are reflecting deeply on their practice. The interviews uncovered underlying motivations that guide teachers’ use of videos, barriers that prevent them from actively participating in online discussions on the VLC, and reflections that they withheld from public posting.