Presentations

Navigating Aid Alternatives: Government-to-Government funding partnerships in Jordan, Senegal, Nepal [CIES 2023 Panel Presentations]

Since 2010, USAID has increased funding to partner country institutions by 50% [1], and the current administration's localization agenda suggests that the government-to-government (G2G) modality may become increasingly common. Implementing partners operating in countries with a G2G arrangement must pay careful attention to what technical assistance and system strengthening mean for scale and sustainability. This topic was covered by three presentations in a panel session: (1) Driving government ownership of a new language policy through a Government-to-Government partnership: The case of Senegal; (2) Government-to-Government programs to improve student learning: The case of Nepal; and (3) The case of Jordan.

Using teaching and learning materials in Uzbekistan: Lessons from observations and interviews [CIES 2023 Presentation]

This panel presentation shares the results of two uptake studies examining how mathematics, Uzbek language arts, ICT, and EFL teachers in Uzbekistan are using newly developed teaching and learning materials in the classroom.

Digital Inclusion and data-driven dropout prevention in Guatemala [CIES 2023 Presentation]

Building on past efforts in dropout prevention, the Guatemala Basic Education Quality and Transitions Program seeks to support the Ministry of Education to deploy an open-source mobile application to teachers' devices that will both facilitate data collection and display easily understandable information on student learning, attendance, and risk of dropout. The application will enable teachers to regularly assess student learning, track daily attendance, and report on key variables that past research has shown to be indicators of risk of dropout, including trauma, economic stress, and other destabilizing conditions. The data will be shared at the school, municipal, and ministry levels, but only in anonymized and aggregated form, which we expect will reduce teacher fears of punitive accountability. Free community Wi-Fi and school-based internet connectivity will improve teachers' ability to access the application and other education data, resources, and tools, while also improving digital inclusion for students to access online learning opportunities and resources. This presentation was delivered by Cynthia del Aguila at the 2023 CIES Annual Conference.
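
The anonymize-then-share step described above lends itself to a simple aggregation before any data leave the school. The sketch below is a minimal Python illustration of that idea; the record layout, field names, and risk categories are hypothetical stand-ins, not the application's actual schema.

```python
from collections import defaultdict

# Hypothetical per-student records as an app like this might collect them.
# Field names and risk categories are illustrative only.
records = [
    {"student_id": "S001", "school": "Escuela A", "attended": 18, "days": 20, "risk": "high"},
    {"student_id": "S002", "school": "Escuela A", "attended": 20, "days": 20, "risk": "low"},
    {"student_id": "S003", "school": "Escuela B", "attended": 15, "days": 20, "risk": "medium"},
]

def aggregate_by_school(records):
    """Aggregate student records to school level, dropping identifiers
    so that only anonymized, aggregated data are shared upward."""
    schools = defaultdict(lambda: {"students": 0, "attended": 0, "days": 0,
                                   "risk_counts": defaultdict(int)})
    for r in records:
        s = schools[r["school"]]
        s["students"] += 1
        s["attended"] += r["attended"]
        s["days"] += r["days"]
        s["risk_counts"][r["risk"]] += 1
    # Report attendance rates and risk distributions only; no student IDs remain.
    return {
        school: {
            "students": v["students"],
            "attendance_rate": round(v["attended"] / v["days"], 2),
            "risk_counts": dict(v["risk_counts"]),
        }
        for school, v in schools.items()
    }

print(aggregate_by_school(records))
```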

What have we learned about improving learning at the system level? [CIES 2023 Presentation]

This presentation, given at the CIES 2023 Annual Conference, highlights the global shift in measuring and improving learning outcomes since the adoption of SDG 4.1.1, under which countries are asked to report on "the proportion of children and young people…achieving at least a minimum proficiency level in reading and mathematics". Based on these data, the World Bank and UIS estimate that eighty percent of children in poor countries cannot read a simple sentence by the end of primary school. In reviewing the results of multiple regional and system-level learning improvement programs to better understand the distribution of learning outcomes and system-level impact, we find that: 1) while some progress has been made, the massive changes required to move the needle on the share of children reaching minimum proficiency remain elusive; and 2) a small share of schools accounts for the majority of the gains. Finally, the panelist links this challenge to the theme of the panel: that education change will only be successful if work is undertaken to better understand and diminish the restraining forces of mindsets and education system social norms.
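
To make finding (2) concrete: when school-level gains are highly concentrated, a few schools dominate the aggregate. The toy calculation below uses invented numbers purely to illustrate the arithmetic behind that claim; it is not drawn from the programs reviewed.

```python
# Illustrative only: made-up school-level learning gains showing how a
# small share of schools can account for most of the total gain.
gains = sorted([0.0, 0.1, 0.1, 0.2, 0.2, 0.3, 0.4, 1.5, 2.0, 2.2], reverse=True)
top = gains[: len(gains) // 5]  # top 20% of schools by gain
share = sum(top) / sum(gains)
print(f"Top 20% of schools contribute {share:.0%} of total gains")  # 60%
```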

Analyzing Alignment in Early Grade Reading Curricula, Instruction, and Assessment in Nepal: The Surveys of Enacted Curriculum Approach [CIES 2023 Presentation]

This presentation was given at the Comparative and International Education Society (CIES) annual conference in Washington, DC on February 21, 2023. The presentation describes a systematic research method used in USAID's Early Grade Reading Program II (EGRP II) in Nepal to analyze the alignment between the early grade reading curriculum, teacher instruction, and student assessment. The file includes both the presentation slides and the explanatory notes.

Measuring support for children’s engagement in learning: psychometric properties of the PLAY toolkit [CIES 2023 Presentation]

Despite the growing interest in supporting learning through play across many low- and middle-income countries, measures of how contexts can support learning through play are lacking. As part of the Playful Learning Across the Years (PLAY) project, the concept of "self-sustaining engagement" was identified as central to learning through play. That is, learning through play is effective because children are deeply engaged in their learning and are self-motivated to learn. The PLAY toolkit was designed to measure how settings – particularly adult-child interactions in those settings – support children's self-sustaining engagement in learning.

The toolkit was developed for use in multiple age groups across different settings. For the 0-2 age group, the toolkit assessed support for children's engagement in the home, largely through interactions between the caregiver and child; these interactions were assessed through observations and an interview with the caregiver. For the 3-5 age group, tools were developed to measure support for engagement in the home and the classroom. Tools for the 6-12 age group focused only on the classroom. The classroom-based tools had several components. Teacher-child interactions were assessed through observations, a teacher survey and – for the 6-12 age group only – a student survey. There was also a classroom inventory to assess physical aspects of the classroom – such as materials on the walls – which might support self-sustaining engagement in learning.

The toolkit was developed in three phases. The Build phase used qualitative data to understand local concepts of self-sustaining engagement. The Adapt phase used cognitive interviewing and small-scale (approx. 25 schools, centers, or homes) quantitative data to refine the tools. The Test phase used large-scale (approx. 150 schools, centers, or homes) quantitative data to assess the psychometric properties of the tools. This presentation focuses on these psychometric analyses. Data were collected for the 6-12 age group from Kenya, Ghana, and Colombia; for the 3-5 age group from Colombia, Jordan, and Ghana; and for the 0-2 age group from Colombia only. Results indicate how the concept of "support for self-sustaining engagement" can be divided into constituent sub-scales and how the different methods of assessment – observation, teacher report, and student report – relate to one another. We will discuss plans to develop a final toolkit, based on these analyses, which can help strengthen the evidence base on learning through play.
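
As a flavor of the psychometric analyses reported here, the sketch below computes Cronbach's alpha, a standard internal-consistency statistic, for one hypothetical observation sub-scale. The item count and 0-3 ratings are invented for illustration and are not PLAY data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores: 6 classrooms rated on 4 items of one
# "support for engagement" sub-scale (0-3 rating scale).
scores = [
    [2, 3, 2, 3],
    [1, 1, 0, 1],
    [3, 3, 2, 3],
    [0, 1, 1, 0],
    [2, 2, 3, 2],
    [1, 2, 1, 1],
]
print(round(cronbach_alpha(scores), 2))
```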

Multi-Language Assessment (MLA) for young children: A screener to understand language assets [CIES 2023 Presentation]

The lack of information about children's oral language skills limits our understanding of why some children do not respond to literacy instruction. Even though native oral language skills are not strong predictors of native-language decoding (Durgunoglu et al., 1993; Lesaux et al., 2006), oral language skills have been shown to play a small role in non-native word reading for non-native speakers (Geva, 2006; Quiroga et al., 2002). Yet the threshold of language skill required is not well understood. Cross-linguistic studies show that some literacy skills transfer between languages (Abu-Rabia & Siegel, 2002; Bialystok, McBride-Chang, & Luk, 2005; Cisero & Royer, 1995; Comeau, Cormier, Grandmaison, & Lacroix, 1999; Denton et al., 2000; Durgunoglu, 2002; Durgunoglu et al., 1993; Genesee & Geva, 2006; Gottardo, Yan, Siegel, & Wade-Woolley, 2001; Koda, 2007; Wang et al., 2006). These include letter knowledge, print concepts, and language skills (phonological awareness and vocabulary). The transfer of these skills is considered a resource (Genesee, Geva, Dressler, & Kamil, 2006) that assists reading in the additional languages.

Children learning to read in a non-native language bring their first language (i.e., mother tongue) to the instructional setting. Yet its use will depend on the teacher's use of translanguaging between the language of instruction and children's home language(s). The presence of two or more languages contributes to children having domains of knowledge in specific languages. For example, domains of knowledge children learn at school, such as shapes, might be known only in the language of instruction. Relatedly, domains of knowledge they learn at home from family interactions, such as cooking, might be known only in the mother tongue. And domains of knowledge that children learn on the playground are likely to be learned in a lingua franca, a language common to the area. Even though mixing languages is common, most language assessments do not capture this knowledge. Even in samples with multilingual students, for reasons of reliability and consistency, most language assessments test children in just one language and describe results for that language. The results are used to help explain performance on reading assessments. But measuring language skills in just one language overlooks the concepts that a multilingual child may have in other languages and describes them from a deficit approach as opposed to recognizing the asset of being multilingual.

To address this problem, we developed a tool, the Multi-Language Assessment (MLA), to measure children's expressive language across multiple languages and understand the skills they have to support their learning. The tool is intended to be reliable, valid, and child friendly. It is our hypothesis that children's expressive language scores across multiple languages can help explain their success or struggles in the early years of formal schooling. The MLA captures children's language skills across multiple languages in a 7-minute interaction with a trained assessor. It measures expressive language for 36 concepts shown in 36 images that children would be exposed to through family and community interactions, conversations, media, books, or school. The items are designed to yield a variable distribution; they are not intended to produce ceiling effects. A child's utterance is coded to one of nine categories of varying weights. Furthermore, the items are intended to have levels of familiarity. For example, for an image of a coconut tree, some children might call it that, while other children describe it by its domain, a tree, without identifying the specific type. Both utterances would earn a child points, but of varying weight.

Research problem:
1. Many children in low- and middle-income contexts do not learn to read efficiently in lower primary.
2. Some hypothesize that the reason for children's poor performance is language related.
3. When children's language skills are assessed, it is usually in one language, and results describe their abilities as deficits as opposed to considering their assets of being multilingual.
4. Assessing language requires time, and young children's attention spans are short, reducing data quality if the assessment is too long.
5. The MLA was created to understand expressive language use across multiple languages and to be brief.

Study: The paper presents results from a recent longitudinal study that collected child-level results at two time points in government schools in rural Kenya. It includes the aforementioned multi-language assessment and measures of reading achievement (e.g., letter knowledge and spelling) at Time 1, when all children were in kindergarten, and again at Time 2, when they had advanced to either a higher kindergarten level or to Grade 1. An existing measure of expressive language was used to explore the concurrent validity of the MLA. The Time 1 sample (n=215) was large enough to examine the technical properties of the tool, and the Time 2 sample (n=200) reflected only 7% attrition, so there was sufficient power to describe individual changes. The features of the language assessment suggest that it is reliable and sensitive. The following analyses were conducted: 1) sample demographics; 2) distributions by subtask; 3) measures of association between subtasks; and 4) item analysis.

The following research questions are addressed:
1. How does expressive language use evolve for children who use three languages as they progress from kindergarten into first grade? For example, do they shift from using the home language for some items at Time 1 to a language of instruction at Time 2?
2. How does children's overall expressive language knowledge (as measured in three languages) contribute to their reading achievement (letter-sound knowledge and spelling) as measured in two languages over time?
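
As an illustration of the coding scheme described above, the sketch below shows one way such weighted utterance scoring could be implemented. The nine category names and their weights are hypothetical stand-ins for the purpose of the example, not the MLA's actual rubric.

```python
# Hypothetical scoring sketch: each child utterance for an item is coded
# to one of nine categories with differing weights. Names and weights
# below are illustrative assumptions, not the MLA's actual categories.
CATEGORY_WEIGHTS = {
    "specific_label_home_language": 4,   # e.g., names the coconut tree in L1
    "specific_label_instruction": 4,     # same concept in the language of instruction
    "specific_label_lingua_franca": 4,   # same concept in a lingua franca
    "domain_label": 2,                   # broader category, e.g., "tree"
    "functional_description": 2,         # describes what it is for or does
    "related_word": 1,
    "gesture_only": 1,
    "dont_know": 0,
    "no_response": 0,
}

def score_child(coded_utterances):
    """Sum weighted scores over the coded utterances (one code per item,
    up to the 36 items in the assessment)."""
    return sum(CATEGORY_WEIGHTS[c] for c in coded_utterances)

# Example: a child's codes for 3 of the 36 items.
print(score_child(["specific_label_home_language", "domain_label", "dont_know"]))  # 6
```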

Building an Assessment of Community-Defined Social-Emotional Competencies from the Ground Up - A Tanzanian Example

Most of the research that informs our understanding of children's social-emotional learning (SEL) comes from Western, Educated, Industrialized, Rich, Democratic (WEIRD) societies, where behavior is guided by a view of the self as autonomous, acting on individual preferences. In subsistence agricultural communities (home to more than a quarter of the world's population), obligations and communal goals override personal preferences, and individuals see themselves as part of a social hierarchy. These contrasting models of the self have profound implications for SEL. Many studies underestimate these implications because they use assessment tools developed in WEIRD settings to understand SEL in lower- and middle-income countries. The aim of our study was to build an SEL assessment from the ground up, based on community definitions of valued competencies in southern Tanzania. In Study 1, qualitative data from parents and teachers indicated that dimensions of social responsibility, such as obedience and respect, were valued highly. Teachers valued curiosity and self-direction more than parents did, as competencies required for success in school. Quantitative assessments in Study 2 found that individuals more exposed to sociodemographic variables associated with WEIRD settings (urban residence and higher parental education and SES) were more curious, less obedient, and had poorer emotional regulation. Overall, the findings suggest that the conceptualization of social-emotional competencies may differ between and within societies; commonly held assumptions of universality are not supported. Based on the findings of this study, we propose a systematic approach to cultural adaptation of assessments. The approach does not rely solely on local participants to vet and adapt items but is instead guided by a rigorous cultural analysis. Such an analysis, we argue, requires us to put aside assumptions about behavioral development and to consider culture as a system with an origin and function. Such an approach has the potential to identify domains of SEL that are absent from commonly used frameworks and to uncover other domains that are conceptualized differently across contexts. In so doing, we can create SEL assessments and SEL programs that are genuinely relevant to the needs of participants.

Strengthening MTB-MLE Policy and Capacity in Mother Tongue Supplementary Reading Materials Provisioning in the Philippines

This presentation describes the development of mother tongue supplementary reading materials to support implementation of the mother tongue-based multilingual education (MTB-MLE) approach to language education in the Philippines.

A Monitoring, Evaluation, and Learning (MEL) Framework for Technology-Supported Remote Trainings [CIES Presentation]

Existing research on the uptake of technologies for adult learning in the global South is often focused on the use of technology to reinforce in-person learning activities and too often involves an oversimplified "with or without" comparison (Gaible and Burns 2005; Slade et al. 2018). This MEL Framework for Technology-Supported Remote Training (MEL-Tech Framework) offers a more nuanced perspective by introducing questions and indicators that look at whether the technology-supported training was designed based on a solid theory of learning; whether the technology was piloted; whether there was time allocated to fix bugs and improve functionality and user design; how much time was spent using the technology; and whether in-built features of the technology provided user feedback and metrics for evaluation. The framework presents minimum standards for the evaluation of technology-supported remote training, which, in turn, facilitates the development of an actionable evidence base for replication and scale-up.

Rather than "just another theoretical framework" developed from a purely academic angle, or a framework stemming from a one-off training effort, this framework is based on guiding questions and proposed indicators that have been carefully investigated, tested, and used in five RTI monitoring and research efforts across the global South: the Kyrgyz Republic, Liberia, Malawi, the Philippines, and Uganda (Pouezevara et al. 2021). Furthermore, the framework has been reviewed for clarity, practicality, and relevance by RTI experts on teacher professional development, policy systems and governance, MEL, and information and communications technology, and by several RTI project teams across Africa and Asia.

RTI drew on several conceptual frameworks and theories of adult learning in the design of this framework. First, the underpinning theory of change for teacher learning was informed by the theory of planned behavior (Ajzen 1991), Guskey's (2002) perspective on teacher change, and Clarke and Hollingsworth's (2002) interconnected model of professional growth. Second, Kirkpatrick's (2021) model for training evaluation helped determine many of the categories and domains of evaluation. However, this framework not only provides guiding questions and indicators helpful for evaluating one-off training events focusing on participants' reactions, learning, behavior, and results (the focus of Kirkpatrick's model) but also includes guiding questions and indicators reflective of a "fit for purpose" investigation stage, a user needs assessment and testing stage, and long-term sustainability. Furthermore, this framework's guiding questions and indicators consider participants' attitudes and self-efficacy (based on the research underpinning the theory of planned behavior), as well as aspects of participants' post-training, ongoing application and experimentation, and feedback (Clarke and Hollingsworth 2002; Darling-Hammond et al. 2017; Guskey 2002). Lastly, the framework integrates instructional design considerations regarding content, interaction, and participant feedback that are uniquely afforded by technology.
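
To show how stages, guiding questions, and indicators of this kind might be operationalized in an evaluation plan, here is a minimal sketch. The stage names paraphrase the abstract, and the specific questions and indicators are illustrative assumptions rather than the framework's actual wording.

```python
# Illustrative representation of a staged evaluation plan in the spirit of
# the MEL-Tech Framework. All names, questions, and indicators below are
# paraphrased or invented for the example.
mel_tech_plan = {
    "fit_for_purpose": {
        "guiding_questions": [
            "Is the training design based on a solid theory of learning?",
        ],
        "indicators": ["Documented theory of change for teacher learning"],
    },
    "needs_assessment_and_testing": {
        "guiding_questions": [
            "Was the technology piloted with representative users?",
            "Was time allocated to fix bugs and improve functionality and user design?",
        ],
        "indicators": ["Pilot completed", "Bug-fix and usability cycle logged"],
    },
    "training_delivery": {
        "guiding_questions": ["How much time did participants spend using the technology?"],
        "indicators": ["Usage time captured via in-built metrics"],
    },
    "post_training_and_sustainability": {
        "guiding_questions": [
            "Do participants apply and experiment with what they learned over time?",
        ],
        "indicators": ["Follow-up observation of classroom application"],
    },
}

# Quick summary of the plan's coverage by stage.
for stage, detail in mel_tech_plan.items():
    print(f"{stage}: {len(detail['guiding_questions'])} question(s), "
          f"{len(detail['indicators'])} indicator(s)")
```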
