Country Code: 
PHL

Education system strengthening across Asia: a systematic review of USAID activities and critical discussion [CIES 2023 Panel Presentation]

The purpose of this formal group panel presentation is to hold an in-depth discussion of USAID’s investments in system strengthening across Asia over the past decade and how these efforts are situated within the broader global move to focus more intentionally and coherently on education system strengthening. The panel will discuss a 2022 empirical research study (the USAID System Strengthening Review, hereafter “the Review”) conducted by two international research organizations for the USAID Asia Bureau, which reviews USAID system strengthening work in 11 Asian countries. The Review offers a qualitative, evidence-based analysis relevant to the field of comparative and international education (CIE), drawing on new data collected through a desk review of relevant project documents, reports, and evaluations; key informant interviews; a multi-stakeholder survey; and three deep-dive case studies in Nepal, Cambodia, and the Philippines. The group panel will include three presentations on different aspects of the Review, along with discussant commentary and critique to elicit group and audience discussion. The first presentation discusses a theoretical framework drawn from the RISE Programme (Pritchett 2015 and Spivak 2021) and recent analysis from the Brookings Institution’s Center for Universal Education. The Review’s central research questions are guided by these broader global trends, as well as by an analysis framework developed specifically for this study, discussed in Presentation 3. Conclusions are drawn based on this framework, and the overall discussion in Presentations 2 and 3 considers the context of USAID programming in Asia and how new knowledge provides new insights.

Adapting the Early Grade Reading Assessment (EGRA) for Students Who are Deaf in the Philippines [CIES 2023 Panel Presentations]

The purpose of this formal group panel session is to share and discuss the experience in the Philippines of adapting the Early Grade Reading Assessment (EGRA) for students who are deaf and hard-of-hearing. This work was conducted in two phases. First, under the USAID-funded Gabay project, the traditional EGRA was adapted and piloted for students who are deaf and hard-of-hearing, and the resulting instrument was used to conduct a baseline assessment in March 2020. Subsequently, in 2022 the USAID-Philippines Mission tasked the All Children Reading Asia (ACR-Asia) project (2016–2023; implemented by RTI International) with developing a prototype of the assessment that could be administered remotely to students who are deaf and hard-of-hearing when it is not possible to send trained assessors to conduct the assessment in person. The school closures and travel restrictions imposed during the COVID-19 pandemic created major challenges in reaching all students, including students with disabilities. In addition, the Philippines experiences frequent adverse weather and geological events, such as typhoons, flooding, volcanic eruptions, and earthquakes, which, coupled with a geography of thousands of islands and hard-to-reach areas, makes it difficult to conduct on-site, in-person activities in general. A remotely administered assessment would therefore help address these challenges in reaching students, especially those with disabilities, for assessment and support. This panel will discuss the challenges, successes, and lessons learned through this process and provide recommendations on how other countries or projects could build upon the experience in the Philippines.
This panel will address the following:
• Addressing the needs of the Philippines for assessing and reaching students who are deaf and hard-of-hearing
• Considerations in adapting the traditional EGRA for students who are deaf and hard-of-hearing
• The process of adapting and piloting an EGRA for students who are deaf and hard-of-hearing
• Prototyping a remote version of the adapted instrument, including the technological and procedural challenges to address
• Lessons learned and recommendations for similar adaptations

Remote EGRA for Learners Who Are Deaf or Hard-of-Hearing

Early Grade Reading Assessments (EGRAs) measure students’ progress in reading through individual administration of an oral survey of foundational reading skills. Administration is generally conducted on-site by teams of trained assessors, face-to-face with students in a one-on-one capacity. While EGRAs are administered internationally, students who are deaf or hard of hearing are often left at a disadvantage by prevailing reading assessments. To adapt EGRAs to the needs of students who are deaf or hard of hearing, USAID has supported the development of EGRAs specifically for these students in Kenya, Morocco, Nepal, and the Philippines, among other countries. In the Philippines, these assessments have improved understanding of and capability in inclusive education programming, including the development and pilot implementation of the Filipino Sign Language (FSL) curriculum and the training and mentoring of teachers in FSL. As there is no information on existing models of remotely administered EGRAs, the purpose of this activity was to prototype—design, develop, and test for proof of concept and acceptability—an early grade reading assessment for students who are deaf or hard of hearing that is administered asynchronously, with assessors and enumerators who are not on-site. Such a model can be deployed during outbreaks and emergencies that affect the ability to administer EGRAs in person within a specified period, and it is specifically adapted for students who are deaf or hard of hearing.

Influences on teachers’ use of the prescribed language of instruction: Evidence from four language groups in the Philippines.

In 2009, the Philippines introduced a mother tongue-based multilingual education language policy requiring the “mother tongue” as the language of instruction (LOI) in kindergarten through grade 3. Using teacher classroom language data collected from four LOI groups in 2019, we compared the frequency of teachers’ use of the target LOI in different contexts, including urban versus rural classrooms, classrooms with relatively homogeneous student language backgrounds versus more heterogeneous classrooms, and classrooms with materials in the target language versus classrooms without. We also examined language usage against characteristics of the teacher populations, including language background, years of experience, training, and beliefs about the best language for initial literacy. The results strongly suggest that the most influential levers for increasing teacher usage of a designated LOI in these contexts are ensuring that teachers are assigned to schools where the LOI matches their own first language and providing teaching and learning materials in the target LOI, especially teacher’s guides. These two factors were more strongly and more consistently correlated with teacher use of the LOI than all other variables examined. The linguistic homogeneity of the student population also showed a statistically significant, though weaker, association with teacher language usage. This document was developed with support from the American people through the United States Agency for International Development.

Co-designing Prototypes for Future Learning Spaces: A Field Guide for Scaling Future Learning Spaces Innovation in the Philippines

The purpose of this field guide is to introduce concepts, tools, and group activities that can be used to guide educators in co-creating locally defined prototypes of future learning spaces that will not only enhance social, emotional, and academic learning for all Filipino learners, but will also ensure that learners flourish and develop a sense of agency, proactive citizenship, and work readiness for a successful future. The guide was created from selected content, exercises, and group processes introduced in the Leaders in Futures of Education (LIFE) course (June 20–July 19, 2022) and the Prototyping Future Learning Spaces Workshop (August 15–19, 2022), which were attended by Department of Education (DepEd) central office representatives, representatives from three regional offices (i.e., Region III: Central Luzon; Region VI: Central Visayas; and the Cordillera Administrative Region), and prototyping teams consisting of representatives from five schools division offices (SDOs)—Tanauan City, Tuguegarao City, Pasig City, Caloocan City, and Quezon City—and at least one cooperating school in each SDO. This field guide provides a framework for DepEd partnerships across the country to begin their prototyping journey for co-designing future learning spaces for Filipino students.

Computer-based Reading Assessment Pilot Report

In February 2022, ACR–Philippines initiated conversations with USAID and the Philippines Department of Education (DepEd) on developing a prototype technology to enable automated assessment and scoring of learners’ oral reading fluency, listening, and reading comprehension skills. The idea resonated with DepEd leadership for several reasons. During the 2020–2022 school years, the COVID-19 pandemic made face-to-face assessments challenging, particularly in remote learning settings. Teachers were stretched in time and resources to assess learners’ reading skills one-on-one against the most essential learning competencies. Further, other international assessments, such as PISA, use a computer-based format, and this would be an opportunity to understand how well prepared students are to take computer-based tests. In response, ACR-Philippines sought to produce a ‘proof of concept’ exploring the feasibility of a self-administered computer-based reading assessment (CoBRA) in English and Filipino for students in the Philippines. The technology would incorporate voice-recognition software to enable students to read directly into their device. The software would automate scoring of students’ reading through an artificial-intelligence (AI) algorithm designed to calculate words per minute (wpm) and reading accuracy rate. The platform would produce reports providing students, parents, and/or teachers immediate feedback on performance. This is a report of that pilot experience.
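The core metrics named above—words per minute and accuracy rate—can be sketched in a few lines. This is an illustrative sketch only, not the actual CoBRA algorithm: the function name, the naive word-by-word comparison, and the punctuation handling are all assumptions made for the example; a production system would align the speech-recognition transcript against the passage to handle insertions and omissions.

```python
def score_reading(reference: list[str], transcribed: list[str], seconds: float):
    """Compute words-correct-per-minute (wpm) and accuracy from a
    speech-recognition transcript of an oral reading passage.
    Hypothetical helper for illustration; not the CoBRA implementation."""
    # Naive positional comparison; a real system would use sequence
    # alignment (e.g., edit distance) to handle skipped or added words.
    correct = sum(
        1 for ref, hyp in zip(reference, transcribed)
        if ref.lower().strip(".,!?") == hyp.lower().strip(".,!?")
    )
    attempted = len(transcribed)
    wpm = correct / (seconds / 60) if seconds > 0 else 0.0
    accuracy = correct / attempted if attempted else 0.0
    return round(wpm, 1), round(accuracy, 3)

ref = "The quick brown fox jumps over the lazy dog".split()
hyp = "The quick brown fox jump over the lazy dog".split()
print(score_reading(ref, hyp, seconds=12.0))  # → (40.0, 0.889)
```

The division of labor this implies—speech recognition produces a transcript, and a separate scoring step compares it to the passage—is one plausible way to realize the automated scoring the abstract describes.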

Philippines Remote Learning Study Report

In June 2020, the Philippines Department of Education (DepEd) adopted the Basic Education Learning Continuity Plan (BE-LCP), a framework to guide the 2020–2021 school year in light of school closures that started in March 2020, during the final weeks of the 2019–2020 school year. The plan introduced an adjusted and condensed curriculum, the Most Essential Learning Competencies, to support schools and teachers in delivering learning through alternative modalities in lieu of face-to-face classes. DepEd also modified the 2020–2021 school calendar to start October 5, 2020, and end in June 2021. The school year typically runs from June through March in the Philippines, but regions, divisions, and schools needed additional time to prepare and operationalize the BE-LCP. For example, regions were tasked with determining appropriate remote learning delivery modalities based on local context. Approaches were further adapted and defined at the individual school level as schools contextualized the learning continuity plan. Given DepEd’s decentralized approach to contextualizing and ensuring learning continuity for learners, it became clear that remote learning would look vastly different across regions, divisions, and schools. Subsequently, this mixed-methods study was designed to take an in-depth look at schools and families across the country to understand their experiences with teaching and learning during school closures—and particularly to understand how early language and literacy learning can best be supported in the distance learning context.

Formative Evaluation of the DepEd Commons

DepEd ICTS requested support from the USAID/All Children Reading Philippines activity for evaluating the recently launched DepEd Commons online resource portal to ensure that it could live up to its potential as a key strategy for giving all Filipino learners access to educational resources. In response to this request, RTI, in collaboration with the DepEd ICTS EdTech Unit, designed a process for reviewing the DepEd Commons to identify evidence-based recommendations for a fully sustainable and impactful DepEd Commons. Although we use the term ‘evaluation’, this is a formative exercise, recognizing that the platform is in a nascent stage and would benefit most from forward-looking and actionable recommendations for realizing its potential, not a backward-looking judgment of implementation quality or completeness at a certain point in time. In line with this approach—a formative exercise designed to provide actionable recommendations for improvement, given the early stage of development of the platform—we chose a SWOT analysis methodology to frame the exercise. SWOT analyses are conducted to identify strengths, weaknesses, opportunities, and threats. The study team carried out key informant interviews with stakeholders from the implementing department in DepEd and in other departments with similar systems; a quantitative analysis of platform statistics containing usage data; and a self-administered survey focusing on usage and usability of the platform.

Strengthening MTB-MLE Policy and Capacity in Mother Tongue Supplementary Reading Materials Provisioning in the Philippines

This report describes the development of mother tongue supplementary reading materials to support the implementation of the MTB-MLE approach to language education.

A Monitoring, Evaluation, and Learning (MEL) Framework for Technology-Supported Remote Trainings [CIES Presentation]

Existing research on the uptake of technologies for adult learning in the global South is often focused on the use of technology to reinforce in-person learning activities and too often involves an oversimplified “with or without” comparison (Gaible and Burns 2005; Slade et al. 2018). This MEL Framework for Technology-Supported Remote Training (MEL-Tech Framework) offers a more nuanced perspective by introducing questions and indicators that examine whether the technology-supported training was designed based on a solid theory of learning; whether the technology was piloted; whether time was allocated to fix bugs and improve functionality and user design; how much time was spent using the technology; and whether built-in features of the technology provided user feedback and metrics for evaluation. The framework presents minimum standards for the evaluation of technology-supported remote training, which, in turn, facilitates the development of an actionable evidence base for replication and scale-up. Rather than “just another theoretical framework” developed from a purely academic angle, or a framework stemming from a one-off training effort, this framework is based on guiding questions and proposed indicators that have been carefully investigated, tested, and used in five RTI monitoring and research efforts across the global South: the Kyrgyz Republic, Liberia, Malawi, the Philippines, and Uganda (Pouezevara et al. 2021). Furthermore, the framework has been reviewed for clarity, practicality, and relevance by RTI experts on teacher professional development, policy systems and governance, MEL, and information and communications technology, and by several RTI project teams across Africa and Asia. RTI drew on several conceptual frameworks and theories of adult learning in the design of this framework.
First, the underpinning theory of change for teacher learning was informed by the theory of planned behavior (Ajzen 1991), Guskey’s (2002) perspective on teacher change, and Clarke and Hollingsworth’s (2002) interconnected model of professional growth. Second, Kirkpatrick’s (2021) model for training evaluation helped determine many of the categories and domains of evaluation. However, this framework not only provides guiding questions and indicators helpful for evaluating one-off training events focusing on participants’ reactions, learning, behavior, and results (as is the focus in Kirkpatrick’s model) but also includes guiding questions and indicators reflective of a “fit for purpose” investigation stage, a user needs assessment and testing stage, and long-term sustainability. Furthermore, this framework’s guiding questions and indicators consider participants’ attitudes and self-efficacy (based on the research underpinning the theory of planned behavior), as well as aspects of participants’ ongoing post-training application, experimentation, and feedback (Clarke and Hollingsworth; Darling-Hammond et al. 2017; Guskey). Lastly, the framework integrates instructional design considerations regarding content, interaction, and participant feedback that are uniquely afforded by technology.
