DRK-12 proposals are required to include “mechanisms to assess success through project-specific external review and feedback processes.” Questions often emerge around the most appropriate mechanisms for this assessment, how to differentiate between research and evaluation programs, and how to plan for and implement an effective evaluation.
The following resources from CADRE and other networks offer information and advice on the evaluation and external review of NSF-funded projects. They may be helpful whether you are finalizing your submissions for this year’s DRK-12 solicitation, considering how external review of your current award can better support or inform your project activities, or interested in learning more about the possible roles that evaluators and external review boards can play.
- Snapshot of Active DRK-12 Awards: Evaluation & External Review
- Perspectives from the Field:
  - The Role of Evaluation in Research Projects (Dan Hanley & Jessica Sickler)
  - External Review of DRK-12 Projects (Catherine McCulloch with input from Kristin Bass, Kathy Haynie, Dan Heck, & David Reider)
- Evaluation Resource Collections
Snapshot of Active DRK-12 Awards: Evaluation & External Review
These data are from respondents to CADRE's 2022 Annual Survey of Active DRK-12 Awards (N=193) and do not represent the entirety of the active DRK-12 portfolio. The survey collected information regarding project research foci, audiences, and strategies for improving STEM teaching and learning. (Updated August 30, 2022)
Perspectives from the Field
The Role of Evaluation in Research Projects
CADRE recently discussed the role of evaluation in STEM education research with two scholars, Dr. Dan Hanley and Jessica Sickler, who are involved in both research AND evaluation for NSF-funded projects. We thought they could provide helpful insights into the potential role of evaluation in current STEM education research projects. Since project proposals submitted to NSF must include a plan for external review in addition to the research plan, we hope these insights will be useful for those designing research projects.
The written summary includes key ideas from the conversation and their reflections on the following questions.
What are the distinctions and similarities between research and evaluation, particularly at a conceptual level?
- Evaluation is aimed at generating useful, valid information for local clients to help them improve their project activities.
- Research is aimed at generating new knowledge about phenomena to broaden our knowledge base.
But, in the broadest sense, both research and evaluation are systematic inquiries aimed at collecting rigorous data to answer questions, and both use similar quantitative and qualitative approaches. NSF projects need to be both knowledge-based and knowledge-generating.
Confusion between research and evaluation may arise from the foci or types of questions they ask, because both can examine aspects of a project's implementation, its impacts, or the contextual factors that influence them. For example, some foci that are more geared toward evaluation, and even specific to it, include examining:
- the management of a project to help a project team improve its communication or shared vision,
- the quality of interventions (especially those that are administered by project leaders or researchers), or
- the management and implementation of a project's research and data collection activities.
Evaluation results, while they are about a specific program, do contain information that can inform the development of other projects or related interventions. Evaluation also can serve an important role in examining the authentic inclusion and participation of relevant stakeholders... Read More
External Review of DRK-12 Projects
Each year when researchers are writing proposals, CADRE—the resource network for NSF’s DRK–12 program—receives requests for clarification on the use and choice of advisory boards and/or evaluators. The solicitation for the National Science Foundation’s Discovery Research K–12 (DRK–12) program states that “all DRL projects are subject to a series of external, critical reviews of their designs and activities (including their theoretical frameworks, any data collection plans, analysis plans, and reporting plans) … A proposal must describe appropriate mechanisms to assess success through project-specific external review and feedback processes. These might include an external review panel or advisory board proposed by the project or a third-party evaluator. The external critical review should be sufficiently independent and rigorous to influence the project's activities and improve the quality of its findings.”
When and how do you use one entity versus another to inform decision-making during a research project? CADRE contacted several DRK–12 evaluators to get their perspectives on this question, and this is what we heard:
There is overlap between advisory boards and external evaluators: both can provide input and feedback about project activities. But there are also differences between the two that may help you determine which to use.
An advisory board provides nonbinding strategic input and feedback about project plans, activities, and results to a project team. While board members are typically research peers, members are increasingly drawn from target audiences (e.g., teachers) so that those perspectives are integrated into project decision-making and results are relevant to those stakeholder groups. An advisory board tends to meet once or twice per year and may be—by nature of its size, the level of compensation provided, and the frequency and duration of the members’ engagement—a less expensive option than working with an evaluator over the duration of the project... Read More