What We Are Learning About Learning Networks [CIES 2024 Presentation]

The USAID Leading Through Learning Global Platform (LTLGP) and USAID Improving Learning Outcomes for Asia (ILOA) presented a panel at the 2024 CIES Conference on what each project has been learning about establishing and implementing learning networks. Presentations from three USAID learning networks (HELN, GRN, ECCN), one regional hub managed by LTLGP, and ILOA discuss how each learning network uses collaboration, learning, and adapting (CLA) to assess how well it is reaching and meeting the needs of its members, and how each has adapted and adjusted its network based on CLA feedback.

Data-driven decentralized school support: the use of student learning data to direct management support in Tanzania [CIES 2024 Presentation]

On mainland Tanzania, most resource allocation decisions are centralized. The President’s Office - Regional Administration and Local Government (PO-RALG) recruits and assigns teachers, supplies teaching and learning materials, and funds capital construction projects. Local governments receive limited funds for training or for redeploying teachers among schools; their main resource, therefore, is the management attention and support they can give to schools. With an average of 140 schools in a district and a staff of five, only a few schools can be supported.

In 2016, the Ministry of Education and Sports developed a School Quality Assurance Framework to guide local administrators on key areas of focus and guidance for school support. The framework covers six areas: school inputs, teacher practice, student learning outcomes, school environment, school leadership, and community engagement. To facilitate monitoring of these areas, the USAID Tusome Pamoja project piloted a data collection tool that measured progress through indicators. Of particular interest was a group-administered learning assessment that established benchmarks for success for grade 2 learners across six sub-tasks in reading, writing, and mathematics. Due to limited resources, this assessment was applied only to a sample of schools in each district. Districts could assess their overall performance against these indicators and, as a result, developed somewhat generic district-level support plans. This presentation explores how the initial challenge of vague district plans was overcome through a critical data collection process that led to the establishment of benchmarks for success.

Under a subsequent activity, USAID Jifunze Uelewe, software was developed that allows districts to capture group-administered learning data for every grade 2 student and to aggregate this information at the school level. Districts can then rank order all schools in the district by their scores on the learning sub-tasks and select the lowest-performing schools for additional management attention. At the same time, districts can pair high-performing and low-performing schools. Access to school-specific data allows districts to direct their attention to developing plans at the school level to address low learning performance, and to track these schools' progress over time. Schools enter data on government-provided tablets, and the data can be synced when head teachers have access to government-provided wifi.

Decentralized administrators have long been seen as critical for translating national policy into local action. However, they are frequently hampered by a combination of distracted management attention and unclear targets or benchmarks for key inputs, which encourages a laissez-faire status quo. In Tanzania, local governments in four regions have been able to contextualize data to meet their needs and use simple technology to prioritize their attention and decision-making. Our presentation showcases the significance of data-driven decision making and continuous improvement of the system. We further highlight the importance of simple and meaningful change and of fostering proactive decision making at the local level.
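A minimal sketch of the kind of school ranking and pairing described above, assuming pupil-level GALA sub-task data in a table with hypothetical column names; this is illustrative only, not the Jifunze Uelewe software itself:

```python
# Aggregate pupil-level GALA sub-task scores to school means, rank schools,
# flag the lowest performers for extra management attention, and pair them
# with high performers. Column names and the n_focus threshold are assumed.
import pandas as pd

def prioritize_schools(pupil_scores: pd.DataFrame,
                       subtask_cols: list[str],
                       n_focus: int = 10) -> pd.DataFrame:
    """pupil_scores: one row per grade 2 pupil, with a 'school' column and
    one column per GALA sub-task (names here are illustrative)."""
    # Mean sub-task scores per school, plus an overall composite.
    school_means = pupil_scores.groupby("school")[subtask_cols].mean()
    school_means["composite"] = school_means[subtask_cols].mean(axis=1)

    # Rank order all schools in the district (rank 1 = lowest performing).
    school_means["rank"] = school_means["composite"].rank(method="first")
    school_means["needs_support"] = school_means["rank"] <= n_focus

    # Pair the lowest-performing schools with the highest-performing ones.
    ordered = school_means.sort_values("composite").index.tolist()
    pairs = dict(zip(ordered[:n_focus], ordered[::-1][:n_focus]))
    school_means["partner_school"] = school_means.index.map(pairs)
    return school_means.sort_values("rank")

# Example (hypothetical columns):
# plan = prioritize_schools(df, ["letter_sounds", "familiar_words", "comprehension"])
```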

Why Benchmarks Matter

Setting benchmarks or standards that clearly state how well students or teachers should be performing on various skills at each stage of education, and then collecting data to monitor those skills, enables governments to see how individuals and education systems overall are progressing toward the goal of improved learning. This information allows governments to identify where improvements are needed and to chart progress year-to-year and against global standards. So how did Read Liberia do it? Read this brief to find out more!

Linking EGRA and GALA for Sustainable Benchmarking [CIES Presentation]

Prior Early Grade Reading Assessments (EGRA) have been used to set reading fluency benchmarks in Tanzania, both for USAID reporting and for the Government of Tanzania (GoT). Because the EGRA requires one-on-one administration by trained enumerators using tablets, it is currently too expensive to be sustainable within the government system. The Group Administered Literacy Assessment (GALA) is an inexpensive and sustainable way to collect information about students’ reading abilities: it is group administered, does not require intensive training to administer, and is collected on paper and then entered into a database. Unfortunately, the GALA does not contain a fluency measure, which is still the basis of USAID reporting. The Jifunze Uelewe team designed a study to identify reading fluency equivalent benchmarks for the GALA using a subsample of the total GALA respondents. The study administered both the EGRA reading passage and the GALA to a sample of grade 2 and grade 4 pupils attending public schools in Tanzania. Data collection occurred in October 2021 and was still under way when this abstract was submitted, so no results are reported here; the presentation reports the results and discusses how well the linking process worked.
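The abstract does not describe the linking method, so the sketch below shows only one illustrative, equipercentile-style way to derive a GALA cut score equivalent to an EGRA fluency benchmark; the function, variable names, and the method itself are assumptions:

```python
# Find the GALA score whose percentile rank in the linking sample matches
# the percentile rank of a given EGRA oral reading fluency benchmark.
import numpy as np

def gala_equivalent(egra_fluency: np.ndarray,
                    gala_scores: np.ndarray,
                    fluency_benchmark: float) -> float:
    """egra_fluency and gala_scores are paired observations from the same
    pupils in the linking subsample (hypothetical variable names)."""
    # Percentile rank of the fluency benchmark on the EGRA scale.
    pct = np.mean(egra_fluency < fluency_benchmark) * 100
    # GALA score at the same percentile rank.
    return float(np.percentile(gala_scores, pct))

# Example: if the fluency benchmark were 50 correct words per minute, the
# GALA-equivalent cut score would be gala_equivalent(egra, gala, 50).
```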

How data informs the journey: History and the next steps of Early Grade Reading

On January 18, 2022, USAID’s Bureau for Asia collaborated with RTI International to reflect on the journey of early grade reading around the globe. The first presenter, Rosalina J. Villaneza, gave an introduction to national-scale early grade literacy assessments in the Philippines. The second presenter, Pilar Robledo, discussed the advent of USAID early grade reading programs, using the EGR Barometer to explore the impact of these programs. The final presenter, Luis Crouch, reflected on research and experience from early grade reading programs, suggesting the next steps on the journey to improve early grade literacy worldwide. View the recording below.

Using the EGR Barometer to support benchmark and target setting for reading outcomes (CIES 2019 Presentation)

The Barometer offers a dynamic tool for interactive use of data on early grade reading outcomes. The ability to see how the data are affected by different parameters, such as the level at which a reading benchmark is set, allows users to consider which benchmark, and which near-term target for the share of students achieving it, may or may not be realistic. Furthermore, where such data are available, the Barometer also allows users to review the impact of interventions that have contributed to improving reading outcomes, and then factor in whether they can expect similar improvements in the future, given the different investments and initiatives underway in their countries. The presentation is a short demonstration of these two features of the Barometer: target setting and considering the impact of previous or current interventions. Presented at CIES 2019.
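As a small illustration of the target-setting logic (not the Barometer's actual implementation; the function and data are hypothetical), one could compute the share of students meeting a range of candidate benchmarks and judge which near-term target looks realistic:

```python
# Vary the benchmark level and see how the share of students meeting it
# changes. Benchmark values and scores below are purely illustrative.
import numpy as np

def share_meeting_benchmark(fluency_scores: np.ndarray,
                            benchmarks: list[float]) -> dict[float, float]:
    """fluency_scores: oral reading fluency for a sample of students."""
    return {b: float(np.mean(fluency_scores >= b)) for b in benchmarks}

# Example with hypothetical data: compare candidate benchmarks of 30, 40,
# and 50 correct words per minute.
# scores = np.array([...])
# share_meeting_benchmark(scores, [30, 40, 50])
```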

Setting Reading Benchmarks - Evidence from India [CIES 2019 Presentation]

This presentation is based on an activity that was designed to apply lessons learned and best practices from the recent EGRA Benchmarks and Standards Research Report (RTI International, 2018) to a five-language benchmarking activity for early grade reading in India.

USAID Early Grade Reading (EGR) Final Report

Improving early grade reading and writing outcomes has implications more far-reaching than simply raising scores on national and international assessments. Reading is a fundamental tool for thinking and learning, with an integrated and cumulative effect on comprehension in all subject areas. Providing students with a strong foundation in reading increases the likelihood of future academic and workforce success.

By providing Palestinian teachers with additional strategies and resources to build primary students’ essential reading and writing skills, the US Agency for International Development (USAID) Early Grade Reading (EGR) Project supported the goal of the USAID mission in the West Bank/Gaza of “providing a new generation of Palestinians with quality education and competencies that would enable them to thrive in the global economy and empower them to participate actively in a well-governed society.” Specifically, EGR addressed USAID’s strategic Sub-objective 3.1.5 to improve “service delivery in the education sector through increased access to quality education, especially in marginalized areas of the West Bank; a higher quality of teaching, learning and education management practices; and improved quality and relevancy of the education system at all levels.” EGR also directly supported USAID’s global goal to improve early grade reading skills.

In support of these overarching goals, EGR’s project goal was to facilitate change in classroom delivery of early grade reading and writing instruction through three interconnected component areas: evidence-based standards and curriculum revisions, instructional improvements, and parental engagement activities designed to improve student reading and writing competencies in Kindergarten (KG)–Grade 2 in the West Bank. EGR offered a scalable model of early grade reading instruction in 104 West Bank public schools among 351 teachers who taught 9,679 students.

EGR collected data through reviews of curricular and standards documents, studies in schools, and assessments of students’ reading competencies. The project developed book leveling criteria to ensure the age- and grade-level appropriateness of reading materials, which facilitated the development or procurement of over 100,000 books for schools. EGR provided the Ministry of Education and Higher Education (MOEHE) with training modules in early grade reading and writing skills, a reading remediation manual, and a school-based professional development model. The project also created innovative materials for parents to use to enhance their children’s reading skills. Despite its abbreviated timeframe, the project provided the MOEHE with a wealth of educational data, materials, and resources, including many interventions offered for the first time in the Palestinian educational system.

Benchmarks for Early Grade Reading Skills in West Bank Policy Brief

The Ministry of Education and Higher Education (MOEHE) conducted the first EGRA in the West Bank in March 2014 among a nationally representative sample of Grade 2 students, followed by a benchmarking exercise in September 2014. In 2018, the Early Grade Reading (EGR) project conducted a project baseline using an adapted EGRA and MELQO. Following this assessment, the MOEHE expressed interest in revising the 2014 provisional Grade 2 benchmarks and developing Grade 1 benchmarks. EGR conducted a technical benchmarking workshop in November 2018.

Worldwide Inequality and Poverty in Cognitive Results: Cross-sectional Evidence and Time-based Trends

The Sustainable Development Goals (SDGs) for education represent a major departure from the Millennium Development Goals (MDGs) - at least if educational leaders pursue them seriously - in at least two important respects. First, the goals now pertain to learning outcomes. Second, the SDGs place a great deal of focus on inequality. Taking note of this new dual emphasis, this paper assembles the largest database of learning outcomes inequality data that we know of and explores key issues related to the measurement of inequality in learning outcomes, with a view to helping countries and international agencies come to grips with the key dimensions and features of this inequality.

Two issues in particular are explored. First, whether, as countries improve their average cognitive performance (as measured by international learning assessments) from the lowest to middling levels, they typically reduce cognitive skill inequality or, more importantly perhaps, whether they reduce the absolute lack of skills. Second, whether most cognitive skills inequality lies between or within countries.

In dealing with these measurement issues, the paper also explores the degree to which measures of cognitive skills are “proper” cardinal variables lending themselves to generalizations from the field of income and wealth distribution, the field in which many measures of inequality and its decomposition were first applied. To do this, we examine whether using the item response theory (IRT) test scores of programmes such as TIMSS influences these types of findings, relative to using the underlying and more intuitive classical test scores. Patterns emerging from the classical scores are far less conclusive than those from the IRT scores, in part due to the greater ability of the IRT scores to discriminate between pupils at the bottom end of the performance spectrum.

An important contribution of the paper is to examine the sensitivity of standard measures of inequality to different sets of test scores. The sensitivity is high, and the conclusion is that meaningful comparisons between test score inequality and, for instance, income inequality are not possible, at least not with the currently available toolbox of inequality statistics. Finally, the paper explores the practical use of school-level statistics from the test data to inform strategies for reducing inequalities.
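For readers unfamiliar with between/within decompositions, the sketch below shows a generic Theil-T decomposition of test-score inequality into between-country and within-country components; it is not the paper's code, and the paper itself cautions that such statistics are sensitive to the score scale used (IRT versus classical scores):

```python
# Generic Theil-T decomposition: total inequality = between-group + within-group.
# Assumes strictly positive test scores, as the Theil index requires.
import numpy as np

def theil_decomposition(scores: np.ndarray, country: np.ndarray) -> dict:
    """scores: positive test scores; country: country label per pupil."""
    mu = scores.mean()
    total = np.mean(scores / mu * np.log(scores / mu))

    between = 0.0
    within = 0.0
    for c in np.unique(country):
        s = scores[country == c]
        share = len(s) / len(scores)            # population share of country c
        mu_c = s.mean()
        # Between-country term: inequality of country means.
        between += share * (mu_c / mu) * np.log(mu_c / mu)
        # Within-country term: country's own Theil-T, income-share weighted.
        within += share * (mu_c / mu) * np.mean(s / mu_c * np.log(s / mu_c))
    return {"total": total, "between": between, "within": within}
```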
