Assessment

Flip It: An Exploratory (Versus Explanatory) Sequential Mixed Methods Design Using Delphi and Differential Item Functioning to Evaluate Item Bias

The Delphi method has been adapted to inform item refinements in educational and psychological assessment development. An explanatory sequential mixed methods design using Delphi is a common approach for gaining experts' insight into why items might have exhibited differential item functioning (DIF) for a subgroup, indicating potential item bias. The use of Delphi before quantitative field testing to screen for potential sources of item bias is, however, lacking in the literature.

Author/Presenter
Kristin L.K. Koskey
Toni A. May
Yiyun “Kate” Fan
Dara Bright
Gregory Stone
Gabriel Matney
Jonathan D. Bostic
Year
2023
Short Description

The Delphi method has been adapted to inform item refinements in educational and psychological assessment development. An explanatory sequential mixed methods design using Delphi is a common approach for gaining experts' insight into why items might have exhibited differential item functioning (DIF) for a subgroup, indicating potential item bias. The use of Delphi before quantitative field testing to screen for potential sources of item bias is, however, lacking in the literature. An exploratory sequential design is illustrated as an additional approach, using a Delphi technique in Phase I and Rasch DIF analyses in Phase II. We introduce the 2 × 2 Concordance Integration Typology as a systematic way to examine agreement and disagreement across the qualitative and quantitative findings using a concordance joint display table.
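Phase II of the design relies on a DIF statistic to check the expert panel's Phase I predictions against field-test data. As a rough illustration of what a DIF screen computes, the sketch below implements the classic Mantel-Haenszel statistic for dichotomous items; this is a common alternative screen, not the Rasch-based analysis the authors used, and the data, function name, and flagging threshold are illustrative assumptions.

```python
# Minimal sketch of a Mantel-Haenszel DIF screen for dichotomous items.
# Illustrative only: the study used Rasch-based DIF analyses, not MH.
import numpy as np

def mantel_haenszel_delta(responses, group, item):
    """ETS delta for one studied item.

    responses : (n_examinees, n_items) array of 0/1 scores
    group     : (n_examinees,) array, 0 = reference group, 1 = focal group
    item      : column index of the studied item
    """
    total = responses.sum(axis=1)            # matching criterion: raw total score
    num = den = 0.0
    for s in np.unique(total):               # stratify examinees by total score
        stratum = total == s
        ref, foc = stratum & (group == 0), stratum & (group == 1)
        a = responses[ref, item].sum()       # reference correct
        b = ref.sum() - a                    # reference incorrect
        c = responses[foc, item].sum()       # focal correct
        d = foc.sum() - c                    # focal incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    if num == 0 or den == 0:
        return None                          # no usable information in any stratum
    return -2.35 * np.log(num / den)         # common odds ratio on the ETS delta scale

# Toy usage; |delta| >= 1.5 is conventionally flagged as large ("C"-level) DIF
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20))
g = rng.integers(0, 2, size=500)
print(mantel_haenszel_delta(X, g, item=3))
```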

Examining the Influence of COVID-19 on Elementary Mathematics Standardized Test Scores in a Rural Ohio School District

In the United States, national and state standardized assessments have become a metric for measuring student learning and high-quality learning environments. As the COVID-19 pandemic introduced a multitude of learning modalities (e.g., hybrid, socially distanced face-to-face instruction, virtual environments), it became critical to examine how this learning disruption influenced elementary mathematics performance.

Author/Presenter
Dara Bright
Yiyun “Kate” Fan
Chris Fornaro
Kristin L. K. Koskey
Toni A. May
Jonathan D. Bostic
Dolores Swineford
Year
2022
Short Description

In the United States, national and state standardized assessments have become a metric for measuring student learning and high-quality learning environments. As the COVID-19 pandemic introduced a multitude of learning modalities (e.g., hybrid, socially distanced face-to-face instruction, virtual environments), it became critical to examine how this learning disruption influenced elementary mathematics performance. This study tested for differences in mathematics performance on fourth-grade standardized tests before and during COVID-19 in a case study of a rural Ohio school district, using the Measures of Academic Progress (MAP) mathematics test.
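For readers curious about the mechanics, a minimal version of such a pre/during comparison might look like the sketch below. The file name, column names, and the simple two-sample Welch test are assumptions for illustration; the study's actual analysis may have modeled the data differently.

```python
# Minimal sketch: compare MAP mathematics scores before vs. during COVID-19.
# File and column names are hypothetical; the study's analysis may differ.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("map_scores.csv")           # one row per student test record
pre = df.loc[df["cohort"] == "pre_covid", "rit_score"]
during = df.loc[df["cohort"] == "during_covid", "rit_score"]

# Welch's t-test: does not assume equal variances across cohorts
t, p = stats.ttest_ind(pre, during, equal_var=False)

# Cohen's d with a pooled standard deviation, as a rough effect size
d = (during.mean() - pre.mean()) / np.sqrt((pre.var(ddof=1) + during.var(ddof=1)) / 2)
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```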

AI for Tackling STEM Education Challenges

Artificial intelligence (AI), an emerging technology, finds increasing use in STEM education and STEM education research (e.g., Zhai et al., 2020b; Ouyang et al., 2022; Linn et al., 2023). AI, defined as technology that mimics human cognitive behaviors, holds great potential to address some of the most challenging problems in STEM education (Neumann and Waight, 2020; Zhai, 2021). Amongst these is the challenge of supporting all students to meet the vision for science learning in the 21st century laid out, for example, in the U.S.

Author/Presenter
Xiaoming Zhai
Knut Neumann
Joseph Krajcik
Year
2023
Short Description

To best support students in developing competence, assessments are needed that allow students to use knowledge to solve challenging problems and make sense of phenomena. These assessments need to be designed and tested to validly locate students on a learning progression and hence provide feedback to students and teachers about meaningful next steps in their learning. Yet such tasks are time-consuming to score, and it is challenging to provide students with appropriate feedback for developing their knowledge to the next level.
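The pipeline this implies (score a response, locate the student on the progression, return a next step) can be made concrete with a toy sketch. Every level label, cut score, and feedback message below is invented for illustration; none of it comes from the paper.

```python
# Toy sketch: map an automated score onto a hypothetical learning-progression
# level and a next-step message. All levels, cuts, and messages are invented.
from bisect import bisect_right

CUTS = [0.25, 0.50, 0.75]                    # hypothetical cut scores (0-1 scale)
LEVELS = ["emerging", "developing", "proficient", "extending"]
FEEDBACK = {
    "emerging": "Revisit the core idea with a scaffolded example.",
    "developing": "Apply the idea to a new, unfamiliar phenomenon.",
    "proficient": "Explain the mechanism behind your prediction.",
    "extending": "Extend your model to a more complex system.",
}

def locate(score: float) -> tuple[str, str]:
    """Return the progression level and feedback for a normalized score."""
    level = LEVELS[bisect_right(CUTS, score)]
    return level, FEEDBACK[level]

print(locate(0.62))                          # ('proficient', 'Explain the ...')
```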

Applying Machine Learning to Automatically Assess Scientific Models

Involving students in scientific modeling practice is one of the most effective approaches to achieving next-generation science education learning goals. Given the complexity and multi-representational features of scientific models, scoring student-developed models is time- and cost-intensive and remains one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners.

Author/Presenter
Xiaoming Zhai
Peng He
Joseph Krajcik
Year
2022
Short Description

Involving students in scientific modeling practice is one of the most effective approaches to achieving next-generation science education learning goals. Given the complexity and multi-representational features of scientific models, scoring student-developed models is time- and cost-intensive and remains one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners. This study utilized machine learning (ML), an advanced form of artificial intelligence (AI), to develop an approach to automatically score student-drawn models and their written descriptions of those models.
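As a rough sketch of what the written-description half of such a pipeline can look like, the snippet below trains a generic text classifier and reports agreement with human raters. The TF-IDF plus logistic regression model, file name, and column names are stand-in assumptions, not the authors' actual ML approach (which also scored the drawn models themselves).

```python
# Minimal sketch: supervised scoring of students' written model descriptions.
# TF-IDF + logistic regression is a stand-in, not the authors' actual model;
# the file and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("scored_descriptions.csv")  # text + human-assigned score
X_train, X_test, y_train, y_test = train_test_split(
    data["description"], data["score"], test_size=0.2, random_state=0
)

vec = TfidfVectorizer(min_df=2, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)

# Machine-human agreement is commonly reported as Cohen's kappa
preds = clf.predict(vec.transform(X_test))
print("kappa vs. human scores:", cohen_kappa_score(y_test, preds))
```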
