ICT Baseline Assessment

The information and communication technology (ICT) baseline assessment measured the current competency of grade 9, 10, and 11 students on Program-developed materials, including standards, teacher guides, and the ICT student textbooks (STB). The Program adapted the STB for Uzbekistan from an international series of ICT materials published by Cambridge University Press. The assessment was conducted with students across these three grades in two regions of Uzbekistan, Namangan and Sirdaryo.

Additional Analysis for Self-Administered EGRA (Ghana, English)

This report summarizes the findings of additional analyses conducted to gain deeper insight into the piloting of the Self-Administered Early Grade Reading Assessment (SA-EGRA) and the Self-Administered Early Grade Mathematics Assessment (SA-EGMA). These tools were developed and tested by RTI International with the support and direction of Imagine Worldwide. Children complete the assessments independently on tablet-based software while in a classroom with their peers; an adult supervises the process.

ICT Endline Assessment

This information and communication technology (ICT) assessment measured the competency of 1,244 grade 9, 10, and 11 students from the beginning to the end of the 2022–2023 school year in two regions. This longitudinal study measured change in student learning outcomes over the course of a year. The U.S. Agency for International Development (USAID) Uzbekistan Education for Excellence Program conducted a baseline assessment at the beginning of the school year in September 2022 and an endline assessment at the end of the school year in May 2023. These assessments examined students' knowledge of the content of the newly introduced ICT standards and textbooks. Subject matter specialists in ICT defined proficiency as 78%, 79%, and 77% correct responses for grades 9, 10, and 11, respectively. Additionally, to better measure progress during the early years of implementation of the new curriculum, the Program also set intermediate proficiency thresholds of 50%, 40%, and 40% for grades 9, 10, and 11, respectively.
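To illustrate how the two sets of cutoffs partition results, the following is a minimal sketch, not the Program's actual scoring code: the threshold values come from the report, while the function and band names are our own assumptions.

```python
# Hypothetical illustration of the grade-specific proficiency bands
# described in the report. Threshold values are taken from the text;
# all names and structure are assumed for illustration.

FULL_PROFICIENCY = {9: 78.0, 10: 79.0, 11: 77.0}  # % correct
INTERMEDIATE = {9: 50.0, 10: 40.0, 11: 40.0}      # % correct

def proficiency_band(grade: int, percent_correct: float) -> str:
    """Classify a percent-correct score into one of three bands."""
    if grade not in FULL_PROFICIENCY:
        raise ValueError("Thresholds are defined for grades 9-11 only")
    if percent_correct >= FULL_PROFICIENCY[grade]:
        return "proficient"
    if percent_correct >= INTERMEDIATE[grade]:
        return "intermediate"
    return "below intermediate"

# Example: a grade 10 student answering 62% of items correctly
# falls in the intermediate band (40% <= 62% < 79%).
print(proficiency_band(10, 62.0))  # -> "intermediate"
```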

Jordan Early Grade Reading and Mathematics Initiative (RAMP) Final National Survey Report 2023

In May 2023, the MoE's Examination and Test Managing Directorate (ETMD), with technical support from Jordan's RAMP initiative, funded by USAID and UKAID, conducted the final national survey of the eight-year initiative (2015–2023) to measure RAMP's impact and the impact of remedial programs addressing students' learning loss during pandemic-related school closures. The study included 2,181 schools and approximately 244,389 grade 2 (G2) and grade 3 (G3) students, encompassing both Syrian refugee day schools and refugee camp schools.

Using previous surveys as benchmarks, the study revealed substantial improvements in reading and mathematics skills for G2 and G3 students in 2023 compared to 2019 and 2021. Notably, G2 students in MoE traditional schools showed remarkable progress, with reading proficiency increasing from 10.7% in 2021 to 42.4%, while G3 students improved from 39.4% to 60.3%. Similar improvements were seen in Syrian refugee schools: G2 students' reading proficiency rose from 7.2% in 2021 to 36.3% in Syrian day schools and from 4.1% to 15.8% in refugee camp schools, while G3 students in Syrian day schools improved from 43.9% to 51.6% and those in refugee camp schools from 15.9% to 29.6%. There was also a marked decrease in students receiving zero scores in oral reading fluency (ORF): in MoE traditional schools, the share of G2 students scoring zero dropped from 21.3% in 2021 to 4.2% in 2023, while in Syrian day schools it decreased from 26.1% to 7.7% and in refugee camp schools from 69.4% to 22.2%. In mathematics, proficiency improved between 2021 and 2023 (G2 from 6.1% to 13.7%; G3 from 18.4% to 29.3%), but there was no progress relative to 2019, with G2 at 13.7% (down from 18.7% in 2019) and G3 essentially unchanged at 29.3% (up from 29.2%).

The report's recommendations included supporting low-performing schools and implementing specialized programs to engage parents in their children's learning, particularly in mathematics. It highlighted the need for teacher training in mathematics, more weekly mathematics lessons, and specialized mathematics teachers for the early grades. Further suggestions encompassed continuous capacity building for teachers and supervisors, a focus on effective assessment methodologies, and fostering professional accountability. The report underscored the importance of practical, in-person teacher training and of ongoing monitoring and evaluation to drive improvements in early-grade education.

Jordan reflective approach builds a more resilient education system [CIES 2023 Presentation]

The USAID-FCDO Jordan Early Grades Reading & Mathematics Initiative (RAMP) started in 2015 with the goal of improving KG–grade 3 students' reading and mathematics skills by improving curriculum coherence, strengthening teacher professional development and coaching, increasing parental involvement, and improving standards, evaluation, monitoring, and accountability systems. The presentation describes how RAMP helped the Ministry of Education (MOE) build a resilient early grades system that could mitigate the learning loss caused by the school closures imposed to slow the spread of COVID-19. Notably, the early grades system proved more resilient than the upper levels, and administrators and teachers were better prepared to cope with a new context in which children's reading and mathematics skills varied widely: the MOE was able to rapidly implement a national survey (EGRA/EGMA) to measure learning losses and design a remedial program; teachers were able to use diagnostic assessment tools and identify individual students' actual learning needs; teachers were familiar with differentiated instruction and remedial strategies for vulnerable children; and a system was already in place to regularly coach teachers in underserved schools and areas.

Developing school-level instruments for better understanding effective numeracy instruction at scale [CIES 2023 Presentation]

While there has been substantial investment in early-grade reading in low- and middle-income countries (LMICs) over the last 15 years, and a concomitant increase in evidence around what works to improve reading outcomes, investment in early-grade mathematics has been far more limited. As a result, the body of evidence on what works to improve mathematics teaching and learning in LMICs is thinner. This study identified six government- and program-led interventions in LMICs that have evidence of impact on students' numeracy outcomes and are working at scale, in order to understand how and why they are effective and to consolidate that evidence for the international education community. To examine the target programs, the study team developed a suite of instruments designed to identify common elements that these successful numeracy programs may share. The goal in designing these instruments was to be able to examine a range of potential factors, based on the evidence on mathematics teaching and learning from research in high-income countries as well as the more limited research evidence from LMICs. This suite of instruments includes: (1) a quantitative classroom observation instrument, based on multiple frameworks for high-quality mathematics instruction, including work by The Danielson Group (2019), the University of Michigan's High Leverage Teaching Practices, and a cross-institutional working group of mathematics education experts working in LMICs (co-author, 2019); (2) a student cognitive interview instrument intended to provide insight into students' development of higher-order, conceptual understanding of basic mathematics concepts; (3) a qualitative classroom observation instrument and an accompanying lesson-based teacher interview; and (4) a survey of teachers' Mathematical Knowledge for Teaching, based on work by Deborah Ball (2011). Focusing primarily on the quantitative classroom observation and student cognitive interview instruments, this paper will present the theoretical foundations of the instruments and the processes for developing, piloting, and adapting them for different country and program contexts. Preliminary findings and lessons learned from using the tools for data collection across country contexts will also be shared. Given the need to expand the body of evidence around what works to improve mathematics teaching and learning, these instruments represent potentially valuable resources for research in this area, and the authors look forward to discussing their potential use and further development and adaptation.

Using the GPF to create mathematics student standards in Uzbekistan [CIES 2023 Presentation]

To support the Ministry of Public Education (MoPE) in achieving its reform agenda, USAID initiated the Uzbekistan Education for Excellence Program (UEEP) to improve the quality of education and enable all students to become proficient in 21st-century skills such as problem solving and critical thinking. UEEP is implemented by a consortium of partners including RTI International, Florida State University, and Mississippi State University. The Program aims to achieve three overarching results: improved Uzbek language reading and mathematics outcomes in grades 1–4; enhanced information and communication technology (ICT) instruction for grades 1–11; and improved English as a foreign language (EFL) instruction in grades 1–11. In this presentation, we describe the process of revising the student standards for grades 1–4 mathematics, led by Florida State University and UEEP mathematics experts, with ongoing review and feedback from MoPE representatives. The revision of student standards was the first step of the country's curricular reform, followed by the creation of teaching and learning materials (TLMs) and a corresponding teacher professional development package. This process consisted of several stages, beginning with comparing current MoPE standards against the Global Proficiency Framework (GPF), student mathematics standards from South Korea, and the Trends in International Mathematics and Science Study (TIMSS) framework for assessment. These three resources were chosen for specific purposes. The TIMSS framework reflected the Government of Uzbekistan's priority to prepare children in grade 4 to take the TIMSS assessment and perform well. Because this framework reflects only grade 4 learning, it was used primarily to backward-map standards in grades 1–3 and ensure that all content in the TIMSS framework was adequately covered by the end of grade 4. The GPF was the most detailed reference and was used to map the current standards and identify gaps in the progression. However, the Government of Uzbekistan was concerned that the GPF might target skills that are too easy for children in Uzbekistan. Because of this, we also brought in the South Korea standards, which provided a reference from a country seen as a model in Uzbekistan and which we used to make decisions about certain skills. This process allowed us to ensure that the standards development process met national priorities (i.e., the 2020 Presidential Decree on Improving Math Education), reflected international best practices in mathematics, and produced standards that were age appropriate, measurable, and logically organized. We will discuss how we used a quantitative comparative analysis of Uzbekistan's existing standards for different grade levels against the GPF and South Korea standards to identify the revisions needed to improve alignment. The rigor of that exercise allowed us to revise the standards and create a set that was approved by MoPE. These standards became the cornerstone of further reforms, including the creation of a scope and sequence and TLMs for grades 1–4, currently being piloted with 10,000 teachers across Uzbekistan.

Multi-Language Assessment (MLA) for young children: A screener to understand language assets [CIES 2023 Presentation]

The lack of information about children's oral language skills limits our understanding of why some children do not respond to literacy instruction. Even though native oral language skills are not strong predictors of native-language decoding (Durgunoglu et al., 1993; Lesaux et al., 2006), oral language skills have been shown to play a small role in word reading for non-native speakers (Geva, 2006; Quiroga et al., 2002). Yet the threshold of language skill required is not well understood. Cross-linguistic studies show that some literacy skills transfer between languages (Abu-Rabia & Siegel, 2002; Bialystok, McBride-Chang, & Luk, 2005; Cisero & Royer, 1995; Comeau, Cormier, Grandmaison, & Lacroix, 1999; Denton et al., 2000; Durgunoglu, 2002; Durgunoglu et al., 1993; Genesee & Geva, 2006; Gottardo, Yan, Siegel, & Wade-Woolley, 2001; Koda, 2007; Wang et al., 2006). These include letter knowledge, print concepts, and language skills (phonological awareness and vocabulary). The transfer of these skills is considered a resource (Genesee, Geva, Dressler, & Kamil, 2006) that assists reading in additional languages. Children learning to read in a non-native language bring their first language (i.e., mother tongue) to the instructional setting, yet its use will depend on the teacher's use of translanguaging between the language of instruction and children's home language(s). The presence of two or more languages contributes to children having domains of knowledge in specific languages. For example, domains of knowledge children learn at school, such as shapes, might be known only in the language of instruction. Relatedly, domains of knowledge they learn at home from family interactions, such as cooking, might be known only in the mother tongue. And domains of knowledge that children learn on the playground are likely to be learned in a lingua franca, a language common to the area. Even though mixing languages is common, most language assessments do not capture this knowledge. Even in samples with multilingual students, for reasons of reliability and consistency, most language assessments test children in just one language and describe results for that language. The results are then used to help explain performance on reading assessments. But measuring language skills in just one language overlooks the concepts a multilingual child may hold in other languages and describes the child from a deficit perspective rather than recognizing the asset of being multilingual. To address this problem, we developed a tool, the Multi-Language Assessment (MLA), to measure children's expressive language across multiple languages and thereby understand the skills they bring to support their learning. The tool is intended to be reliable, valid, and child friendly. Our hypothesis is that children's expressive language scores across multiple languages can help explain their success or struggles in the early years of formal schooling. The MLA captures children's language skills across multiple languages in a 7-minute interaction with a trained assessor. It measures expressive language for 36 concepts shown in 36 images that children would be exposed to through family and community interactions, conversations, media, books, or school. The items are designed to yield a variable score distribution; they are not intended to produce ceiling effects. A child's utterance is coded to one of nine categories of varying weights.
Furthermore, the items are intended to have levels of familiarity. For example, for an image of a coconut tree, some children might name it precisely, while others describe it by its domain, a tree, without identifying the specific type. Both utterances would earn the child points, but of different weights (a schematic illustration of this coding follows below).

Research problem:
1. Many children in low- and middle-income contexts do not learn to read efficiently in lower primary.
2. Some hypothesize that the reason for children's poor performance is language related.
3. When children's language skills are assessed, it is usually in one language, and the results describe their abilities as deficits rather than recognizing the assets of being multilingual.
4. Assessing language takes time, and young children's attention spans are short; data quality suffers if the assessment is too long.
5. The MLA was created to capture expressive language use across multiple languages and to be brief.

Study: The paper presents results from a recent longitudinal study that collected child-level data at two time points in government schools in rural Kenya. It includes the aforementioned MLA and measures of reading achievement (e.g., letter knowledge and spelling) at Time 1, when all children were in kindergarten, and again at Time 2, when they had advanced either to a higher kindergarten class or to grade 1. An existing measure of expressive language was used to explore the concurrent validity of the MLA. The Time 1 sample (n=215) was large enough to examine the technical properties of the tool, and the Time 2 sample (n=200) reflected only 7% attrition, so there was sufficient power to describe individual changes. The features of the language assessment suggest that it is reliable and sensitive. The following analyses have been conducted: 1) sample demographics; 2) distributions by subtask; 3) measures of association between subtasks; and 4) item analysis.

The following research questions are addressed:
1. How does expressive language use evolve for children who use three languages as they progress from kindergarten into first grade? For example, do they shift from using the home language for some items at Time 1 to a language of instruction at Time 2?
2. How does children's overall expressive language knowledge (as measured in three languages) contribute over time to their reading achievement (letter-sound knowledge and spelling) as measured in two languages?
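The sketch below makes the weighted, multi-category coding concrete. The abstract specifies only that each utterance is coded to one of nine categories of varying weights and that domain-level answers earn partial credit; the category labels and weight values below are hypothetical placeholders, not the MLA's actual rubric.

```python
# Hypothetical sketch of MLA-style utterance coding. The abstract
# describes 36 picture items, nine coding categories with varying
# weights, and credit for domain-level answers (e.g., "tree" for a
# coconut tree). All labels and weights here are assumed.

UTTERANCE_WEIGHTS = {
    "exact_label_L1": 3.0,       # precise name in the home language
    "exact_label_L2": 3.0,       # precise name in the lingua franca
    "exact_label_L3": 3.0,       # precise name in the language of instruction
    "domain_label_L1": 2.0,      # domain-level answer ("tree") in home language
    "domain_label_L2": 2.0,
    "domain_label_L3": 2.0,
    "related_description": 1.0,  # describes the object's use or context
    "unrelated_response": 0.0,
    "no_response": 0.0,
}

def score_item(utterance_code: str) -> float:
    """Return the weighted credit for one coded utterance."""
    return UTTERANCE_WEIGHTS[utterance_code]

def total_score(codes: list[str]) -> float:
    """Sum weighted credit over the coded items of one administration."""
    return sum(score_item(c) for c in codes)

# Example: a child names the coconut tree precisely in the home
# language on one item and gives only the domain ("tree") in the
# language of instruction on another.
print(total_score(["exact_label_L1", "domain_label_L3"]))  # -> 5.0
```

A scheme like this is what lets the same picture set credit knowledge expressed in any of the child's languages, rather than scoring only the language of instruction.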

Report of Self-Administered EGRA/EGMA Pilot (Ghana, English)

This report summarizes the findings of an effort to develop and validate tablet-based, self-administered assessments of English-language foundational literacy and numeracy in the early grades. The tools described in the report were developed at the request of Imagine Worldwide with the support of the Jacobs Foundation. RTI International developed the two assessments, known respectively as the Self-Administered Early Grade Reading Assessment (SA-EGRA) and the Self-Administered Early Grade Mathematics Assessment (SA-EGMA), and carried out field testing and a pilot study to assess the tools' internal consistency, test-retest reliability, and concurrent validity with respect to the "traditional" EGRA and EGMA. The assessments are deemed "self-administered" because children complete them independently in response to instructions and stimuli embedded in the tablet-based software. However, adults typically supervise the organization and conduct of the assessment, as well as the collection of individual data from the tablets for analysis. The tools have been developed under an open-source license. The code can be viewed and downloaded for reuse or modification at https://github.com/ICTatRTI/SE-tools/blob/main/README.md. Users of RTI's Tangerine software may request that the SA-EGRA and SA-EGMA tools be added to their Tangerine groups via https://www.tangerinecentral.org/contact
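As a rough illustration of the three psychometric checks named above, the sketch below computes internal consistency (Cronbach's alpha), test-retest reliability, and concurrent validity as correlations on simulated data. This is not the report's analysis code; the variable names, sample sizes, and toy data are assumptions.

```python
# Illustrative psychometric checks on simulated data
# (not the study's actual code or data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency; items is (n_children, n_items) of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=200)                  # latent skill per child
items = ability[:, None] + rng.normal(scale=1.0, size=(200, 10))

alpha = cronbach_alpha(items)                   # internal consistency

sa_egra_t1 = items.sum(axis=1)                  # SA-EGRA total, first sitting
sa_egra_t2 = sa_egra_t1 + rng.normal(scale=2.0, size=200)  # re-test scores
test_retest_r = np.corrcoef(sa_egra_t1, sa_egra_t2)[0, 1]

# Concurrent validity: correlation with a "traditional" EGRA score
traditional_egra = ability * 9 + rng.normal(scale=3.0, size=200)
concurrent_r = np.corrcoef(sa_egra_t1, traditional_egra)[0, 1]

print(f"alpha={alpha:.2f}, retest r={test_retest_r:.2f}, "
      f"concurrent r={concurrent_r:.2f}")
```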

Read Liberia: Institutionalization of the DEMA-GALA

It is essential that the Government of Liberia have the skills and resources to monitor and assess its schools, students, and teachers as part of an evidence-based education system on the road to self-reliance. Read Liberia maintained a focus on assessment and data use, not only to improve classroom-based instruction and equip school leadership with the knowledge and tools they need to foster quality education, but also to build the capacity of county and district officers in USAID's six priority counties.
