Harnessing AI Speech Recognition Technology for Educational Reading Assessments amid the COVID-19 Pandemic in the Philippines [CIES 2024 Presentation]

The challenges of conducting educational assessments in low- and middle-income environments during the pandemic can be eased by AI-powered speech recognition technology, which offers a promising way to enhance assessments. Using advanced algorithms and machine learning techniques, this technology accurately transcribes spoken language into written text. By integrating AI speech recognition into assessments, reading fluency and comprehension can be measured efficiently without the need for physical presence: students can take the assessments from their homes using smartphones or computers, sparing schools complex logistics. One of the technology's main benefits is instant feedback. As students read aloud, the AI system can assess their pronunciation, intonation, and pace, providing immediate guidance and identifying areas for improvement. This personalized feedback helps students strengthen their reading abilities even in the absence of in-person teacher interaction. Moreover, AI-backed evaluations can be carried out at a wider scale, enabling educators to collect extensive data on reading patterns and to tackle specific issues that are common among students.

The objective of this presentation is to feature the self-administered, AI speech recognition, computer-based reading assessment that RTI developed at the request of the Philippines Department of Education (DepEd) under USAID All Children Reading (ACR). Throughout the 2020-2022 school years, the COVID-19 pandemic posed significant challenges to conducting face-to-face assessments, particularly in remote learning environments. As a result, teachers had limited time and resources to individually assess learners' reading skills against crucial learning competencies. The proposed automated assessment technology offered a potential solution to alleviate this burden and streamline the evaluation process, allowing educators to gauge students' reading abilities remotely and efficiently. In February 2022, ACR-Philippines initiated discussions with USAID and DepEd to produce a 'proof of concept' exploring the feasibility of a self-administered computer-based reading assessment (CoBRA) in English and Filipino for students in the Philippines. The concept found resonance with DepEd leadership because adopting a computer-based format for assessments aligns with international practice and provides an excellent opportunity to ascertain students' readiness to take computer-based tests such as the Program for International Student Assessment (PISA).

This intervention generated a prototype solution that was piloted and tailored to fit DepEd's existing platforms for supporting remote learning and delivery. The pilot provided insights into the feasibility of a computer-based assessment for students in grades 4-6 in the context of the Philippines. The research examined the performance, reliability, and results of the AI speech recognition reading assessment compared with the assessor-administered approach. It generated key design considerations, feedback from end users, recommendations for implementing similar approaches, and directions for the future development of similar technology for other languages within and outside the Philippines.
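The core of such a system is comparing the speech recognizer's transcript of a child's oral reading against the passage text to score fluency. The sketch below illustrates one way this step can work, computing words correct per minute (WCPM) with a simple sequence alignment; the function names, alignment strategy, and sample data are illustrative assumptions, not the actual CoBRA implementation.

```python
# Minimal sketch of ASR-based oral reading fluency scoring: align an ASR
# transcript against the reference passage and compute words correct per
# minute (WCPM). Illustrative assumption only, not CoBRA's actual code.
from difflib import SequenceMatcher

def words_correct_per_minute(reference: str, transcript: str, seconds: float) -> float:
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    matcher = SequenceMatcher(None, ref, hyp)
    # Count reference words the child read correctly (matched in order).
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct * 60.0 / seconds

# Example: a short passage excerpt read in 30 seconds.
reference = "the dog ran to the river and drank the cool water"
transcript = "the dog ran to river and drank the cool water"  # ASR output
print(round(words_correct_per_minute(reference, transcript, 30.0), 1))  # 20.0
```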

Data-driven decentralized school support: the use of student learning data to direct management support in Tanzania [CIES 2024 Presentation]

On mainland Tanzania, most resource allocation decisions are centralized. The President's Office - Regional Administration and Local Government (PO-RALG) recruits and assigns teachers, supplies teaching and learning materials, and funds capital construction projects. Local governments receive limited funds for training support or for redeploying teachers among schools. Their main resource, therefore, is the management attention and support they can provide to schools. With an average of 140 schools per district and a staff of five, a district can support only a few schools.

In 2016, the Ministry of Education and Sports developed a School Quality Assurance Framework to guide local administrators on key areas of focus for school support. The framework covers six areas: school inputs, teacher practice, student learning outcomes, school environment, school leadership, and community engagement. To facilitate monitoring of these areas, the USAID Tusome Pamoja project piloted a data collection tool that allowed progress to be measured through indicators. Of particular interest was a group-administered learning assessment that established benchmarks for success for grade 2 learners across six subtasks in reading, writing, and mathematics. Because of limited resources, this assessment was applied only to a sample of schools in each district. Districts could assess their overall performance against these indicators and, as a result, developed somewhat generic district-level support plans. This presentation will explore how the initial challenge of vague district plans was overcome through a critical data collection process that led to the establishment of benchmarks for success.

Under a subsequent activity, USAID Jifunze Uelewe, software was developed that allowed districts to capture group-administered learning data for every grade 2 student and to aggregate this information at the school level. Districts were then able to rank-order all schools in the district by scores on learning subtasks and select the lowest-performing schools for additional management attention. At the same time, districts were able to pair high-performing and low-performing schools. Access to school-specific data allowed districts to direct their attention to developing school-level plans to address low learning performance and to track these schools' progress over time. Schools enter data on government-provided tablets, and the data can be synced when head teachers have access to government-provided Wi-Fi.

Decentralized administrators have long been seen as critical for translating national policy into local action. However, they are frequently hampered by a combination of distracted management attention and unclear targets or benchmarks for key inputs, which encourages a laissez-faire status quo. In Tanzania, local governments in four regions have been able to contextualize data to meet their needs and use simple technology to prioritize their attention and decision-making. Our presentation showcases the significance of data-driven decision-making and continuous improvement of the system. We further highlight the importance of simple and meaningful change and of fostering proactive decision-making at the local level.
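The rank-and-pair step described above is conceptually simple. The following sketch shows the idea on toy data: rank schools by mean grade 2 scores, flag the lowest performers for management attention, and pair each with a high performer. The data structure and pairing rule are illustrative assumptions, not the software developed under Jifunze Uelewe.

```python
# Illustrative sketch of district-level triage: rank schools by mean
# assessment score, flag the lowest performers, and pair them with high
# performers. School names, scores, and the pairing rule are assumptions.
from statistics import mean

schools = {
    "Mwenge":  [41, 55, 38],   # per-pupil scores on a reading subtask
    "Azimio":  [72, 80, 68],
    "Uhuru":   [33, 29, 40],
    "Mlimani": [60, 66, 59],
}

ranked = sorted(schools, key=lambda s: mean(schools[s]))  # lowest first
n_focus = len(ranked) // 2
focus, strong = ranked[:n_focus], ranked[-n_focus:][::-1]

# Pair each low-performing school with a high-performing partner.
for low, high in zip(focus, strong):
    print(f"Pair {low} (mean {mean(schools[low]):.1f}) "
          f"with {high} (mean {mean(schools[high]):.1f})")
```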

Linking EGRA and GALA for Sustainable Benchmarking [CIES Presentation]

Prior Early Grade Reading Assessments (EGRA) have been used to set reading fluency benchmarks in Tanzania for USAID reporting and for the Government of Tanzania (GoT). Because the EGRA requires one-on-one administration with trained enumerators and tablets, it is currently too expensive to be sustainable within the government system. The Group Administered Literacy Assessment (GALA) is an inexpensive and sustainable way to collect information about students' reading abilities: it is group administered, does not require intensive training to administer, and is collected on paper, which is then entered into a database. Unfortunately, the GALA does not contain a fluency measure, which is still the basis of USAID reporting. The Jifunze Uelewe team designed a study to identify reading fluency-equivalent benchmarks for the GALA, using a subsample of the total GALA respondents. The study administered both the EGRA reading passage and the GALA to a sample of grade 2 and grade 4 pupils attending public schools in Tanzania. Data collection occurred in October 2021 and was still under way when this abstract was submitted, so no results are available here; the presentation will report the results and discuss how well the linking process worked.
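One common way to carry out such a linking is equipercentile in spirit: find the GALA score whose percentile rank in the linking sample matches the percentile rank of the EGRA fluency benchmark. The sketch below illustrates this on toy data; the method and numbers are assumptions for illustration, and the study's actual equating procedure may differ.

```python
# Hedged sketch of equipercentile-style linking: locate the GALA score at
# the same percentile rank as the EGRA fluency benchmark in a sample of
# pupils who took both instruments. Data and method are illustrative.
def percentile_rank(scores, value):
    below = sum(s < value for s in scores)
    equal = sum(s == value for s in scores)
    return (below + 0.5 * equal) / len(scores)

def linked_cut_score(egra_fluency, gala_scores, fluency_benchmark):
    target = percentile_rank(egra_fluency, fluency_benchmark)
    # Smallest GALA score whose percentile rank reaches the target.
    for cut in sorted(set(gala_scores)):
        if percentile_rank(gala_scores, cut) >= target:
            return cut
    return max(gala_scores)

# Toy linking sample: the same pupils took both instruments.
egra = [12, 18, 25, 30, 42, 47, 55, 61, 70, 80]  # correct words per minute
gala = [3, 5, 8, 9, 12, 14, 16, 18, 21, 24]      # GALA total score
print(linked_cut_score(egra, gala, fluency_benchmark=50))  # -> 16
```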

Uganda/LARA: Journeys Impact Qualitative Assessment instruments

LARA developed a set of qualitative tools to learn about the successes and challenges related to the implementation of Journeys and to understand what changes staff and pupils had observed since Journeys started in the program schools. The qualitative tools include individual interview and focus group discussion (FGD) guides for head teachers, teaching and non-teaching staff, change agents, and students.

There are two individual interviews, one for teachers and another for head teachers. The individual interview for teachers investigates the value the Journeys program has brought to teachers personally, to the school, and to the classroom, for example, changes in the way teachers relate and interact with pupils and changes in disciplinary practices at the school. The individual interview for head teachers, on the other hand, investigates what has gone well and what head teachers are struggling with regarding the implementation of Journeys for School Staff and Journeys for Pupils (the Uganda Kids Unite [UKU] program).

There are three FGD guides: (i) the FGD guide for teaching and non-teaching staff, which provides information about changes in the school as a result of Journeys (for example, interactions among students, teacher attendance, and the extent of SRGBV), initiatives undertaken by the school to make it safe and positive, and how those initiatives improved the school and/or reduced violence; (ii) the FGD guide for head teachers and school change agents (SCAs), which gathers feedback on the successes and challenges associated with implementing the Journeys program for school staff and the Journeys program for pupils, as well as improvements needed for the continuity of the Journeys program in the schools; and (iii) the FGD guide for students, which focuses mainly on what pupils enjoyed most about the UKU program and the specific UKU activities they loved. It also asks about what pupils did not enjoy in the UKU meetings, initiatives that UKU teams developed to improve the school, what pupils learned through the UKU program, and how the school and classroom have changed since Journeys began.

Uganda/LARA: EGRA and supplementary data collection instruments

The Early Grade Reading Assessment (EGRA) is a diagnostic instrument designed to quickly assess foundational skills for literacy acquisition among pupils in the early grades of primary school. This diagnostic tool, whose content has been adapted for Uganda and Ugandan local languages, can include a number of subtasks depending on the grade being assessed. EGRA is administered to pupils both in their local language and in English via tablets using a software application (for example, Tangerine) designed specifically to collect data on mobile devices; EGRA data can also be gathered manually using paper forms. Each instrument is administered by trained assessors in one-on-one sessions with individual pupils and requires approximately fifteen minutes.

The subtasks in the English EGRA tools used by LARA include letter sound knowledge, oral passage reading, reading comprehension, and vocabulary. LARA administers the English subtasks mainly to P3 and P4 pupils. The subtasks in the local language (Luganda, Runyankore/Rukiga, and Runyoro/Rutooro) EGRA tools include orientation to print, letter sound knowledge, segmenting, oral passage reading, reading comprehension, and listening comprehension. Apart from orientation to print, which is administered only to P1 pupils, the local language subtasks are administered to P1-P4 pupils. The EGRA tools are accompanied by a pupil stimuli packet used to administer the letter sound knowledge and oral passage reading subtasks.

The subtasks used by LARA were adopted from the EGRA tools developed for the School Health and Reading Program (SHRP). SHRP adapted and vetted the tools for the three languages during a series of weeklong workshops that included researchers, primary school teachers, language board members, Coordinating Center Tutors (CCTs), and MoES staff. The adaptation workshop for Luganda was held in December 2012; the Runyankore/Rukiga tools were adapted in January 2013; and the Runyoro/Rutooro tools were adapted in August 2013. The tools were also piloted in each of the three language areas during the workshops.

As part of EGRA data collection, LARA administers supplementary tools to assess pupil context and instructional leadership. These tools also provide important contextual information on the teachers and schools participating in EGRA. They include the pupil context interview, the head teacher questionnaire, the teacher questionnaire, and the school inventory form, each summarized below.
Pupil Context Interview: Used to gather information on pupils' preschool attendance, language spoken at home, possessions in the household, and support for reading in the home.
Head Teacher Questionnaire: Used to gather information from head teachers regarding their instructional leadership, including their training and education background and their support to the teaching of reading in the lower grades. Data from this instrument inform training and targeted corrective actions intended to improve head teachers' managerial skills and their support to the teaching of reading.
Teacher Questionnaire: Used to gather information on teachers' education, experience, and demographics; the support and supervision they receive; and the availability of teaching materials. This information provides the basis for training and for the provision of teaching materials to help teachers improve their pedagogical skills.
School Inventory Form: Used to gather information on basic school infrastructure (for example, water source, latrines, and electricity) as well as the presence and use of a school library.

USAID Early Grade Reading (EGR) Final Report

Improving early grade reading and writing outcomes has implications more far-reaching than simply raising scores on national and international assessments. Reading is a fundamental tool for thinking and learning, with an integrated and cumulative effect on comprehension in all subject areas. Providing students with a strong foundation in reading increases the likelihood of future academic and workforce success. By providing Palestinian teachers with additional strategies and resources to build primary students' essential reading and writing skills, the US Agency for International Development (USAID) Early Grade Reading (EGR) Project supported the goal of the USAID mission in the West Bank/Gaza of "providing a new generation of Palestinians with quality education and competencies that would enable them to thrive in the global economy and empower them to participate actively in a well-governed society." Specifically, EGR addressed USAID's strategic Sub-objective 3.1.5 to improve "service delivery in the education sector through increased access to quality education, especially in marginalized areas of the West Bank; a higher quality of teaching, learning and education management practices; and improved quality and relevancy of the education system at all levels." EGR also directly supported USAID's global goal of improving early grade reading skills.

In support of these overarching goals, EGR's project goal was to facilitate change in classroom delivery of early grade reading and writing instruction through three interconnected component areas: evidence-based standards and curriculum revisions, instructional improvements, and parental engagement activities designed to improve student reading and writing competencies in Kindergarten (KG)-Grade 2 in the West Bank. EGR offered a scalable model of early grade reading instruction in 104 West Bank public schools, reaching 351 teachers who taught 9,679 students.

EGR collected data through reviews of curricular and standards documents, studies in schools, and assessments of students' reading competencies. The project developed book leveling criteria to ensure the age- and grade-level appropriateness of reading materials, which facilitated the development or procurement of over 100,000 books for schools. EGR provided the Ministry of Education and Higher Education (MOEHE) with training modules in early grade reading and writing skills, a reading remediation manual, and a school-based professional development model. The project also created innovative materials for parents to use to enhance their children's reading skills. Despite its abbreviated timeframe, the project provided the MOEHE with a wealth of educational data, materials, and resources, including many interventions offered for the first time in the Palestinian educational system.

One Page Brief on Group-Administered Literacy Assessment (GALA)

GALA is an assessment tool for measuring early literacy skills in a group setting. It consists of two main components: student booklets with multiple-choice questions and an assessor tool providing a complete protocol for test administration.

Task Order 27: Egypt Grade 3 Early Grade Reading Assessment (EGRA) Group Assessment Report (English and Arabic Versions)

The purpose of this pilot was twofold: 1) to examine the reliability of the group assessment tool in measuring the construct of early grade reading; and 2) to determine whether the group assessment could be used as a partial replacement for the individual EGRA.
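For a group-administered multiple-choice tool, one standard reliability check is internal consistency, often summarized with Cronbach's alpha. The sketch below shows the computation on toy item-level data; the choice of statistic and the data are illustrative assumptions, and the report's actual analyses may differ.

```python
# Illustrative sketch of Cronbach's alpha over dichotomous item scores,
# one common internal-consistency check for a group-administered test.
# Data and statistic choice are assumptions, not the report's analyses.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of pupils, each a list of 0/1 item scores."""
    k = len(item_scores[0])
    item_vars = [pvariance([p[i] for p in item_scores]) for i in range(k)]
    total_var = pvariance([sum(p) for p in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

pupils = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
]
print(round(cronbach_alpha(pupils), 2))  # 0.76 on this toy sample
```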

School Quality Assessment for Education and WASH in Three UNICEF-Supported Regions

This is the inception report for the School Quality Assessment for Education and WASH in Three UNICEF-Supported Regions of Tanzania. The report outlines the approach to the study, including the fundamental research questions and the associated design for implementing the study.

Lot Quality Assurance Sampling (LQAS) Pilot Activities in Amhara and Tigray, Ethiopia: Final Report

This report summarizes the main findings and lessons learned from piloting the lot quality assurance sampling (LQAS) methodology in the education sector in Ethiopia. It also suggests next steps for applying the LQAS methodology more broadly to education program monitoring.
