- Harris, C., Krajcik, J., Pellegrino, J., & DeBarger, A. (2019). Designing Knowledge-in-Use Assessments to Promote Deeper Learning. Educational Measurement: Issues and Practice.
- Shin, N., Choi, S. Y., Stevens, S. Y., & Krajcik, J. S. (2019). The Impact of Using Coherent Curriculum on Students’ Understanding of Core Ideas in Chemistry. International Journal of Science and Mathematics Education, 17(2), 295-315.
- Bielik, T., Damelin, D., & Krajcik, J. (2018). Why Do Fishermen Need Forests? Developing a Project-Based Unit with an Engaging Driving Question. Science Scope, 41(6), 64-72.*
- Damelin, D., Krajcik, J., McIntyre, C., & Bielik, T. (2017). Students making system models: An accessible approach. Science Scope, 40(5), 78-82.
- Krajcik, J., & Delen, I. (2017). How to Support Learners in Developing Usable and Lasting Knowledge of STEM. International Journal of Education in Mathematics, Science and Technology, 5(1), 21-28. DOI: 10.18404/ijemst.
The Framework for K-12 Science Education has set forth an ambitious vision for science learning that integrates disciplinary core ideas, science and engineering practices, and crosscutting concepts so that students can develop the competence to meet the STEM challenges of the 21st century. Achieving this vision requires transforming assessment practice from reliance on multiple-choice items to performance-based, knowledge-in-use tasks. However, such constructed-response tasks are time-consuming to score, which often precludes timely feedback and has hindered science teachers from using these assessments. Artificial intelligence (AI) has demonstrated great potential to meet this challenge. To tackle it, experts in assessment, AI, and science education will gather for a two-day conference at the University of Georgia to generate knowledge about integrating AI into science assessment.