Methods for Assessing Replication

The goal of this project is to formalize subjective ideas about the concept of replication, provide statistical analyses for evaluating replication studies, establish properties for assessing the conclusiveness of replication studies, and develop principles for designing conclusive and efficient programs of replication studies.

Award Number: 
1841075
Funding Period: 
September 1, 2018 to August 31, 2021
Full Description: 

Replication of prior findings is a fundamental feature of science and part of the logic supporting the claim that science is self-correcting. However, there is little prior research on the methodology of replication itself. Meta-analyses and systematic reviews, which summarize collections of research studies, are more common; the narrower question of whether the findings of a set of experimental studies replicate one another has received less attention. There is no clearly defined and widely accepted definition of a successful replication study, nor a statistical literature offering methodological guidelines on how to design a single replication study or a set of replication studies. The research proposed here builds this much-needed methodology.

The project addresses three fundamental problems. First, how should replication be defined: what, precisely, should it mean to say that the results in a collection of studies replicate one another? Second, given a definition of replication, what statistical analyses should be used to decide whether the studies in a collection replicate one another, and what are the properties of those analyses (e.g., sensitivity or statistical power)? Third, how should one or more replication studies be designed to provide conclusive answers to questions of replication? The project has the potential to benefit a range of empirical sciences by providing statistical tools to evaluate the replicability of experimental findings, methods for assessing the conclusiveness of replication attempts, and software to help plan programs of replication studies that can yield conclusive evidence about the replicability of scientific findings.
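The project description does not specify which analyses it will develop, but one standard starting point for asking whether a set of effect estimates are mutually consistent is Cochran's Q test for homogeneity. The sketch below is illustrative only (not the project's method) and assumes each study reports an effect estimate and its sampling variance; the function name and example numbers are hypothetical.

```python
import numpy as np
from scipy.stats import chi2


def cochran_q(effects, variances):
    """Cochran's Q test for homogeneity of k effect estimates.

    Illustrative sketch: effects are study-level estimates,
    variances their sampling variances. Returns (Q, p-value).
    """
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)       # Q statistic
    df = len(effects) - 1
    p = chi2.sf(q, df)  # small p suggests heterogeneity (non-replication)
    return q, p


# Hypothetical example: three studies with similar effect estimates,
# so Q should be small and p large (no evidence of heterogeneity).
q, p = cochran_q([0.30, 0.25, 0.35], [0.01, 0.02, 0.015])
```

A large p-value here is consistent with the studies estimating a common effect, though, as the project description notes, "success" criteria and the power of such tests are exactly the open questions the project aims to address.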
