Improving Evaluations of R&D in STEM Education

The primary goal of this set of workshops is to provide STEM education researchers with the framework, skills, and community they need to incorporate recent developments in causal inference methods into their research.

Full Description

The primary goal of this set of workshops is to provide STEM education researchers with the framework, skills, and community they need to incorporate recent developments in causal inference methods into their research. These methods will be immediately implementable in their current (or near-future) studies and will yield stronger causal findings, providing higher-quality evidence regarding the potential of innovations to improve STEM education broadly. A secondary goal is to give the graduate assistants at the workshops (statistics students) a strong foundation in the real-world problems facing researchers in STEM education today. Immersion in this community is intended to strengthen their communication skills while also providing them with opportunities to develop new methods that address problems the STEM education community currently faces.

STEM education research and development studies often focus on the development and iterative refinement of interventions meant to increase STEM participation and skills. Because large-scale randomized experiments are often not possible, researchers typically rely on correlational methods to explore the effects of interventions. Over the past several years, however, statisticians have developed a broad array of methods for understanding causality that do not require large-scale randomized trials. While these causal inference methods are now common in fields such as medicine and education policy, they remain much less common in STEM education research. The purpose of this set of workshops is to introduce STEM education researchers to these methods and to show how they relate to three research designs they already use: (1) matching on a single variable (e.g., age, gender), (2) pre-test/post-test comparisons, and (3) lab experiments. In addition to these new developments, the workshops will include broader discussions of confounding, validity types and trade-offs, design sensitivity, effect size reporting, and questionable research practices (e.g., p-hacking).


Project Materials

No content available.