The Use of Pictorial Supports as an Accommodation for Increasing Access to Test Items for Students with Limited Proficiency in the Language of Testing
This paper reports on an NSF-funded project that examines vignette illustrations (VIs) as a form of testing accommodation for English language learners (ELLs)—students who are developing English as a second language yet are tested in English in major assessment programs in the U.S. VIs are pictorial supports intended to make the content
of test items more accessible to ELLs without altering their text and without giving away their answers. We have developed a procedure for systematically designing VIs. Based on semiotics, socio-cultural theory, and cognitive science, our procedure allows identification of both linguistic/cultural challenges—constituents (words, phrases, terms, idiomatic expressions) that may pose challenges to ELLs due to their limited English proficiency or their limited experience with certain contextual information—and linguistic/cultural affordances (constituents that are not likely to pose these challenges to ELLs). Based on the identified linguistic and cultural challenges and affordances, illustration development teams composed of bilingual teachers, science teachers, and science content experts write scripts that specify the characteristics that the illustrations should have. The paper discusses the procedure for developing VIs and the potential of VIs as a valid, cost-effective, easy-to-implement testing accommodation in multilingual and multicultural contexts in which student proficiency in the language of testing is a potential threat to test validity.
In this paper, we report on a study that compares state, national, and international assessment programs as to the characteristics and functions of the illustrations used in their science test items. We used our conceptual framework for examining the characteristics of illustrations in science items (Solano-Flores & Wang, 2009, 2011) to code the illustrations of samples of items. We examined the statistical significance of differences in the frequencies of different illustration variables observed in samples of science items from assessments from two countries (China and the U.S.) in four science areas: physics, chemistry, biology, and earth and space science. We observed statistically significant differences between the numbers of features in the illustrations originating in China and those originating in the U.S. Illustrations from China tended to have more varied and complex characteristics than their U.S. counterparts. We discuss the implications of these findings for the design of science items in assessment projects that involve culturally and linguistically diverse populations, both in the U.S. and in the context of international test comparisons.
This list of ELL resources is a working document prepared by CADRE for the ELL Working Group. New resources will be added as they are identified. This list includes the citation and the article abstract. Please do not circulate or quote this list of resources. Sources are organized alphabetically by STEM content area (Science, Technology, Engineering, and Math) and by the general education topic explored in the paper (assessment, curriculum, instruction, language, professional development, system/policy, technology, and writing). Articles related to technology are listed twice: once under the relevant content area and topic, and again in a separate technology section at the end for ease of searching.
Our searches utilized the ERIC and EBSCO databases (fn 1) using the following search terms: "math" or "math educat*" (fn 2) or "science" or "science educat*" in combination with "English Language Learner," ELL, "Dual Language Learner," DLL, bilingual, "Limited English Proficient," LEP, ESOL, or "English Speakers of Other Languages." Math searches in the ERIC and EBSCO databases included all literature published since 1966. As the science searches were intended to update a pre-existing literature synthesis (Lee, 2005), we looked for peer-reviewed articles published between 2005 and the present.
To ensure that we had located the key articles, we ran searches on specific journals of interest, including the Elementary School Journal, the Harvard Educational Review, the Journal of Research in Science Teaching, Science Education, the Journal for Research in Mathematics Education, and the journals published by AERA. These follow-up searches focused on articles published since 2000. Furthermore, as recommended by Okhee Lee, we ran specific Google searches on three authors (L. Khisty, J. Moschkovich, and R. Gutierrez) who focus on issues of ELL and math education. Additional articles were recommended by PIs.
Mathematics Teachers Teaching English Language Learners: What Knowledge Do They Need? (Driscoll, Heck, Chval)
To investigate the contribution of the DR K‐12 portfolio to the knowledge base on math and science learning among ELLs, CADRE designed a study to explore the ELL work that is being conducted in the DR K‐12 projects. This paper summarizes the work of this study. It begins with a description of the methodology employed, followed by a presentation of the findings, and finally a discussion of the conclusions drawn from this work. The findings are organized into discussions about the expertise held by the DR K‐12 ELL researchers and research teams, the characteristics of the ELL research being conducted by the DR K‐12 projects, and a comparison of the research conducted by the DR K‐12 projects with published research on ELL‐science education and ELL‐math education.
Including English Language Learners in the Process of Test Development: A Study on Instrument Linguistic Adaptation for Cognitive Validity
This paper reports preliminary results from an investigation, still in progress, on the use of verbal protocols among native Spanish-speaking, English language learners (ELLs) of various proficiency levels and background characteristics. We focus on language use among ELLs during various stages of a cognitive interview designed to probe whether and how students
benefitted from the inclusion of illustrations as a form of testing accommodation. While the majority of students did not use their native language, 29% of participants drew from their native language to convey their thoughts. These students varied considerably in their patterns of use of the two languages at different parts of the cognitive interviews. Our findings are consistent with research in the field of bilingualism. First, bilingual individuals vary tremendously in their patterns of use of two languages across different contexts. Second, bilingual individuals continually use their two languages when performing cognitive tasks, even if the tasks are given in only one of the languages and the individuals are expected to provide their responses only in that language. In addition, even ELLs who are classified as non- or limited-English proficient are capable of providing valuable information in English on their interpretation of test items. We discuss how these findings can be used to ensure the participation of ELLs in talk-aloud protocols as part of the cognitive validity procedures used in large-scale test development.
Illustrations with Graphic Devices in Large-Scale Science Assessments: An Exploratory Cross-Cultural Study of Students’ Interpretations
In this exploratory, cross-cultural study, we examined students' interpretations of graphic device-based illustrations used in science tests. Graphic devices are visual components (e.g., arrows, dotted lines) intended to ensure proper understanding of the scientific processes or phenomena represented by the illustrations. We address cultural differences in terms of the interaction of two factors: students' country of origin and items' country of origin. We hypothesized that interpretations made by students of device-based illustrations are more accurate for items generated in their own country than for items generated in another country. Two matched samples of American college students who lived and studied in the U.S. (n=40) and Chinese college students who lived and studied in mainland China (n=40) were given illustrations from eight science items whose illustrations contained different sorts of graphic devices; four of those items were sampled from Chinese large-scale assessments and four from American large-scale assessments. For each illustration,
students were asked: (1) to describe what they saw in the illustration, and (2) whether they thought the illustration represented a scientific concept and, if so, to describe which scientific concept was represented. The accuracy of the responses was scored based on scoring rubrics developed for each item. The results indicate that: (1) some illustrations were more difficult to interpret accurately than others, regardless of the students’ or the items’ country of origin; (2) Chinese students had more accurate interpretations than their American counterparts of the scientific concepts represented by the illustrations; and (3) students’ interpretations of the scientific concepts illustrated were more accurate for items generated in the students’ own culture than items generated in the other culture. We discuss lessons learned from this exploratory study and future directions for a full study.