Practical Guide and Literature Review

*** This is the Russian translation of the Science of Teaching Structured Pedagogy Guide Literature Review *** Learning outcomes in low- and middle-income countries are catastrophically low. The challenge of improving foundational literacy and numeracy (FLN) outcomes depends on raising the quality of teaching and supporting the instructional decisions of individual teachers, of whom there are tens of thousands in many countries. Structured pedagogy programs have demonstrated that they can support teachers in making these individual pedagogical decisions at scale, and that such changes can have a significant impact on learning outcomes.

Developing a Scope and Sequence for Literacy and Numeracy Instruction

This guide covers the steps involved in reviewing an existing national curriculum and developing a scope and sequence from it. This foundational process takes place before literacy and numeracy teaching and learning materials are written, so that the content is adapted to the specific context, reflects country-level standards, and matches developmental needs.

Report of Self-Administered EGRA (Malawi, Chichewa)

This report summarizes the findings of an effort to develop and validate tablet-based, self-administered assessments of Chichewa-language foundational literacy and numeracy in the early grades in Malawi. RTI International developed the two assessments, known respectively as the Self-Administered Early Grade Reading Assessment (SA-EGRA) and the Self-Administered Early Grade Mathematics Assessment (SA-EGMA), with the support and at the direction of Imagine Worldwide. The assessments are deemed “self-administered” because children complete them independently in response to instructions and stimuli embedded in the tablet-based software. However, adults typically supervise the organization and conduct of the assessment as well as the collection of individual data from the tablets for analysis.

Measuring support for children’s engagement in learning: psychometric properties of the PLAY toolkit [CIES 2023 Presentation]

Despite the growing interest in supporting learning through play across many low- and middle-income countries, measures of how contexts can support learning through play are lacking. As part of the Playful Learning Across the Years (PLAY) project, the concept of “self-sustaining engagement” was identified as central to learning through play. That is, learning through play is effective because children are deeply engaged in their learning and are self-motivated to learn. The PLAY toolkit was designed to measure how settings – particularly adult-child interactions in those settings – support children’s self-sustaining engagement in learning. The toolkit was developed for use with multiple age groups across different settings. For the 0-2 age group, the toolkit assessed support for children’s engagement in the home, largely through interactions between the caregiver and child. These interactions were assessed through observations and through an interview with the caregiver. For the 3-5 age group, tools were developed to measure support for engagement in the home and the classroom. Tools for the 6-12 age group focused only on the classroom. The classroom-based tools had several components. Teacher-child interactions were assessed through observations, a teacher survey, and – for the 6-12 age group only – a student survey. There was also a classroom inventory to assess physical aspects of the classroom – such as materials on the walls – which might support self-sustaining engagement in learning. The toolkit was developed in three phases: the Build phase used qualitative data to understand local concepts of self-sustaining engagement; the Adapt phase used cognitive interviewing and small-scale (approx. 25 schools, centers, or homes) quantitative data to refine the tools; and the Test phase used large-scale (approx. 150 schools, centers, or homes) quantitative data to assess the psychometric properties of the tools. This presentation focuses on these psychometric analyses. Data were collected for the 6-12 age group from Kenya, Ghana, and Colombia; for the 3-5 age group from Colombia, Jordan, and Ghana; and from Colombia only for the 0-2 age group. Results indicate how the concept of “support for self-sustaining engagement” can be divided into constituent sub-scales and how the different methods of assessment – observation, teacher report, and student report – relate to one another. We will discuss plans to develop a final toolkit, based on these analyses, which can help strengthen the evidence base on learning through play.
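To make the inter-method comparison concrete, the following minimal sketch (Python, synthetic data) shows how classroom-level scores from the three assessment methods could be correlated with one another; the sample size, noise levels, and all outputs are illustrative assumptions, not PLAY toolkit results.

```python
# Sketch: inter-method correlations for classroom-level "support for
# engagement" scores. All names and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_classrooms = 150  # roughly the Test-phase scale described above

# A shared latent level of support per classroom, plus method-specific
# noise, so the three methods agree imperfectly, as real methods do.
true_support = rng.normal(size=n_classrooms)
methods = {
    "observation":    true_support + rng.normal(scale=0.6, size=n_classrooms),
    "teacher_report": true_support + rng.normal(scale=0.9, size=n_classrooms),
    "student_report": true_support + rng.normal(scale=1.1, size=n_classrooms),
}

names = list(methods)
scores = np.vstack([methods[m] for m in names])
corr = np.corrcoef(scores)  # 3x3 inter-method correlation matrix

for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"r({names[i]}, {names[j]}) = {corr[i, j]:.2f}")
```

In an analysis like the one described above, high correlations would suggest the methods tap the same construct, while systematically lower correlations for one method would flag it for review.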

Multi-Language Assessment (MLA) for young children: A screener to understand language assets [CIES 2023 Presentation]

The lack of information about children’s oral language skills limits our understanding of why some children do not respond to literacy instruction. Even though native oral language skills are not strong predictors of native-language decoding (Durgunoglu et al., 1993; Lesaux et al., 2006), oral language skills have been shown to play a small role in non-native word reading for non-native speakers (Geva, 2006; Quiroga et al., 2002). Yet the threshold of language skills required is not well understood. Cross-linguistic studies show that some literacy skills transfer between languages (Abu-Rabia & Siegel, 2002; Bialystok, McBride-Chang, & Luk, 2005; Cisero & Royer, 1995; Comeau, Cormier, Grandmaison, & Lacroix, 1999; Denton et al., 2000; Durgunoglu, 2002; Durgunoglu et al., 1993; Genesee & Geva, 2006; Gottardo, Yan, Siegel, & Wade-Woolley, 2001; Koda, 2007; Wang et al., 2006). These include letter knowledge, print concepts, and language skills (phonological awareness and vocabulary). The transfer of these skills is considered a resource (Genesee, Geva, Dressler, & Kamil, 2006) that assists reading in additional languages. Children learning to read in a non-native language bring their first language (i.e., mother tongue) to the instructional setting. Yet its use will depend on the teacher’s use of translanguaging between the language of instruction and children’s home language(s). The presence of two or more languages contributes to children having domains of knowledge in specific languages. For example, domains of knowledge children learn at school, such as shapes, might be known only in the language of instruction. Relatedly, domains of knowledge they learn at home from family interactions, such as cooking, might be known only in the mother tongue. And domains of knowledge that children learn on the playground are likely to be learned in a lingua franca, a language common to the area. Even though mixing languages is common, most language assessments do not capture this knowledge. Even in samples with multilingual students, for reasons of reliability and consistency, most language assessments assess children in just one language and describe results for that language. The results are used to help explain results on reading assessments. But measuring language skills in just one language overlooks the concepts that a multilingual child may have in other languages and describes them from a deficit approach, as opposed to recognizing the asset of being multilingual. To address this problem, we developed a tool, the Multi-Language Assessment (MLA), to measure children’s expressive language across multiple languages and understand the skills they have to support their learning. The tool is intended to be reliable, valid, and child friendly. Our hypothesis is that children’s expressive language scores across multiple languages can help explain their success or struggles in the early years of formal schooling. We conceptualized and developed the MLA to capture children’s language skills across multiple languages in a 7-minute interaction with a trained assessor. The MLA measures expressive language for 36 concepts shown in 36 images that children would be exposed to through family and community interactions, conversations, media, books, or school. The items included in the assessment yield variable distributions; they are not intended to be items that would yield ceiling effects. A child’s utterance is coded to one of nine categories of varying weights.
Furthermore, the items are intended to have levels of familiarity. For example, for an image of a coconut tree, some children might call it that, while other children describe it by its domain, a tree, without identifying the specific type. Both utterances would earn a child points, but of varying weight (a minimal sketch of such weighted scoring appears after this abstract).

Research problem:
1. Many children in low- and middle-income contexts do not learn to read efficiently in lower primary.
2. Some hypothesize that the reason for children’s poor performance is language related.
3. When children’s language skills are assessed, it is usually in one language, and the results describe their abilities as deficits rather than considering the assets of being multilingual.
4. Assessing language requires time, and young children’s attention spans are short, reducing data quality if the assessment is too long.
5. The MLA was created to understand expressive language use across multiple languages and to be brief.

Study: The paper presents results from a recent longitudinal study that collected child-level results at two time points in government schools in rural Kenya. It includes the aforementioned multi-language assessment and measures of reading achievement (e.g., letter knowledge and spelling) at Time 1, when all children were in kindergarten, and again at Time 2, when they had advanced either to a higher kindergarten level or to Grade 1. An existing measure of expressive language was used to explore the concurrent validity of the MLA. The Time 1 sample (n=215) was large enough to examine the technical properties of the tool, and the Time 2 sample (n=200) had only 7% attrition, so there was sufficient power to describe individual changes. The features of the language assessment suggest that it is reliable and sensitive. The following analyses have been conducted: 1) sample demographics; 2) distributions by subtask; 3) measures of association between subtasks; and 4) item analysis.

The following research questions are addressed:
1. How does expressive language use evolve for children who use three languages as they progress from kindergarten into first grade? For example, do they shift from using the home language for some items at Time 1 to a language of instruction at Time 2?
2. How does children’s overall expressive language knowledge (as measured in three languages) contribute to their reading achievement (letter sound knowledge and spelling), as measured in two languages, over time?
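A rough sketch of how such weighted, multi-language utterance coding could work is shown below; the nine category labels, their weights, and the item and language names are hypothetical stand-ins, not the actual MLA coding scheme.

```python
# Illustrative sketch of weighted utterance scoring for a multi-language
# expressive vocabulary screener. Categories and weights are hypothetical.

# Nine coding categories, each with a weight reflecting response quality.
CATEGORY_WEIGHTS = {
    "exact_label_home_language": 3.0,   # exact concept name, home language
    "exact_label_instruction":   3.0,   # exact name, language of instruction
    "exact_label_lingua_franca": 3.0,   # exact name, lingua franca
    "domain_label":              2.0,   # superordinate term, e.g., "tree"
    "functional_description":    2.0,   # describes what it is for or does
    "related_word":              1.0,   # semantically related, not the target
    "unintelligible":            0.0,
    "dont_know":                 0.0,
    "no_response":               0.0,
}

def score_child(responses):
    """Sum weighted scores over the 36 items.

    `responses` maps item id -> (category, language) as coded by the
    assessor. Returns the total score and a per-language tally of
    credited responses, so language use can be tracked over time.
    """
    total = 0.0
    by_language = {}
    for item, (category, language) in responses.items():
        weight = CATEGORY_WEIGHTS[category]
        total += weight
        if weight > 0:
            by_language[language] = by_language.get(language, 0) + 1
    return total, by_language

# Example: exact label in the home language for one item, a domain-level
# answer in the language of instruction for another.
responses = {
    "item01_coconut_tree": ("exact_label_home_language", "Kiswahili"),
    "item02_shape":        ("domain_label", "English"),
}
print(score_child(responses))  # (5.0, {'Kiswahili': 1, 'English': 1})
```

The per-language tally is the design point: a child earns credit no matter which language the utterance is in, while the tally preserves information about which concepts live in which language.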

Report of Self-Administered EGRA/EGMA Pilot (Ghana, English)

This report summarizes the findings of an effort to develop and validate tablet-based, self-administered assessments of English-language foundational literacy and numeracy in the early grades. RTI International developed the two assessments, known respectively as the Self-Administered Early Grade Reading Assessment (SA-EGRA) and the Self-Administered Early Grade Mathematics Assessment (SA-EGMA), at the request and direction of Imagine Worldwide and with the support of the Jacobs Foundation. RTI carried out field testing and a pilot study to assess the tools' internal consistency, test-retest reliability, and concurrent validity with respect to "traditional" EGRA and EGMA. The assessments are deemed “self-administered” because children complete them independently in response to instructions and stimuli embedded in the tablet-based software. However, adults typically supervise the organization and conduct of the assessment as well as the collection of individual data from the tablets for analysis. The tools have been developed under an open-source license. The code can be viewed and downloaded for reuse or modification at https://github.com/ICTatRTI/SE-tools/blob/main/README.md. Users of RTI's Tangerine software may request that the SA-EGRA and SA-EGMA tools be added to their Tangerine groups via https://www.tangerinecentral.org/contact
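For readers less familiar with the three psychometric checks named above, the sketch below shows one conventional way to compute them (Cronbach's alpha for internal consistency, a test-retest correlation, and a concurrent-validity correlation against a traditional assessment); the data and variable names are synthetic and do not come from the pilot.

```python
# Sketch of the three psychometric checks on synthetic data.
# Nothing here is taken from the actual SA-EGRA/SA-EGMA pilot.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_children, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=200)                    # latent skill per child
items = ability[:, None] + rng.normal(scale=0.8, size=(200, 10))

sa_score_t1 = items.sum(axis=1)                               # time 1
sa_score_t2 = sa_score_t1 + rng.normal(scale=2.0, size=200)   # retest
traditional = 5 * ability + rng.normal(scale=2.0, size=200)   # "traditional" test

print("internal consistency (alpha):", cronbach_alpha(items))
print("test-retest r:", np.corrcoef(sa_score_t1, sa_score_t2)[0, 1])
print("concurrent validity r:", np.corrcoef(sa_score_t1, traditional)[0, 1])
```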

PLAY overview CIES (Dubeck et al., 2022)

Play has the potential to transform the global learning crisis. In infancy and early childhood, play builds a strong foundation for later learning by improving brain development and growth (Goldstein, 2012). In education systems that lack capacity to support children effectively, play brings its own powerful engine to drive learning: the joyful, engaged intrinsic motivation of children themselves (Zosh et al., 2017). In this way, play contributes to the holistic development of children, helping to prepare them for the challenges of the current and future world. Accordingly, there is an urgent need to improve measurement of playful learning, to be able to add to the evidence base on what the benefits of play are, how playful learning takes place, and how it can be promoted at home and at school across the lifespan. This presentation focuses on a renewed conceptualization of playful learning and describes an innovative approach, supported by the LEGO Foundation, to measuring how settings contribute to playful learning for children ages 0 to 12. The settings we examine include homes, classrooms, and ECD centers. Following Tseng and Seidman (2007), we view settings as consisting of social interactions (i.e., between teachers or caregivers and children) and the organization of resources (e.g., learning materials, games). First, we will present our conceptual framework, which identifies six constructs to guide our measurement strategy. The constructs, such as ‘support for exploration’, represent the ways in which a setting supports playful learning. Next, we will present our contextualization framework, which guides how we are adapting and modifying the measurement tools to different contexts. The tool consists of a protocol to observe adult-child interactions and survey measures conducted with teachers, caregivers, and primary school pupils. As part of the development process, the observation and survey measures will go through a three-phase development process in Kenya, Ghana, Colombia, and Jordan. The Build phase involved collecting qualitative data from teachers, caregivers, and students to understand their perceptions of playful learning and how it is supported at home and at school. Next, an Adapt phase took place, in which the initial versions of the measurement tools underwent cognitive interviewing, field adaptation, and a small pilot to adjust and extend the items in the tool. The third, Test phase is a full pilot of the instruments, and the data will undergo rigorous psychometric analyses to review the validity and reliability of the tools in the four country contexts. We will use the results to adjust the instruments and to finalize the conceptual framework and contextualization strategies. The final toolkit will be publicly available towards the end of 2022, with supporting materials for contextualization, piloting, training, and analysis. The toolkit will be available on a public platform designed to promote sharing of data collected using the tool and to enable collaboration to continually improve approaches to measuring support for playful learning.

Instructional Support for Effective Large-Scale Reading Interventions (Learning at Scale)

Learning outcomes are low and instruction is poor in many low- and middle-income countries (LMICs). These shortcomings are particularly concerning given the substantial learning loss due to COVID-19 from which many systems are suffering. The Learning at Scale study identified eight of the most effective large-scale education programs in LMICs and is now examining what factors contribute to successful improvements in learning outcomes at scale (see the list of programs on the last page of this brief). These programs were selected based on their demonstrated gains in reading outcomes at scale, from either midline or endline impact evaluations. The study addresses three overarching research questions, focused on understanding (1) the components of instructional practices (Brief 1), (2) instructional supports (Brief 2), and (3) system supports (Brief 3) that lead to effective instruction. This brief focuses specifically on instructional supports. It addresses the following research question: What methods of training and support lead to teachers adopting effective classroom practices in successful, large-scale literacy programs?

Instructional Practices for Effective Large-Scale Reading Interventions (Learning at Scale)

The Learning at Scale study aimed to investigate factors contributing to successful improvements in learning outcomes at scale in eight of the most effective large-scale education programs in LMICs (see the map of programs on the last page of this brief). These programs were selected based on their demonstrated gains in reading outcomes at scale, from either midline or endline impact evaluations. The study addressed three overarching research questions, focused on understanding the components of instructional practices (Brief 1), instructional supports (Brief 2), and system supports (Brief 3) that lead to effective instruction. This brief focuses specifically on instructional practices. It addresses the following research question: What classroom ingredients (e.g., teaching practices, classroom environment) lead to learning in programs that are effective at scale?

Teacher Language and Literacy Assessment: Final Report

The Research for Effective Education Programming – Africa (REEP–A) Task Order, awarded in September 2016, is a five-year project within the United States Agency for International Development (USAID) Africa Bureau. The primary objective of REEP–A is to generate and effectively disseminate Africa regional and country-specific education data, analysis, and research to inform the prioritization of needs and education investment decisions. One research focus under REEP–A is to explore how teachers’ language proficiency and literacy in the language of instruction (LOI) influence students’ learning outcomes. It is hypothesized that teachers’ level of language proficiency and literacy in the LOI can either facilitate student learning, if high, or impede it, if low. However, limited data are available on how precisely teacher language and literacy skill levels relate to student outcomes. Exploring this relationship requires a valid and reliable tool to measure teachers’ language and literacy skills. USAID therefore commissioned the development of the Teacher Language and Literacy Assessment (TLLA) to assess teachers’ language proficiency and literacy in the required LOI. The TLLA, adaptable to any language, consists of subtasks assessing speaking, listening, reading, and writing, as well as vocabulary and grammar, in the language(s) used for teaching and learning at the primary school level in a given context. It is envisioned that policymakers, researchers, and other education stakeholders can use the TLLA to collect data on teachers’ linguistic assets and gaps in the languages that their role requires them to use. These data could be useful for identifying factors contributing to student learning outcomes, informing teacher training and professional development needs, designing teacher deployment policies, and evaluating the impact of interventions aimed at improving teachers’ or students’ language and literacy skills. The aim of this report is to present the new tool and disseminate initial findings on its technical adequacy. The international community has directed considerable effort toward assessing and understanding the impact of language on students’ literacy and language skills, and the TLLA is a complementary tool that shows promise for understanding teachers’ language assets and needs.
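As a concrete illustration of the assessment's structure, here is a hedged sketch of how TLLA results might be recorded per teacher and per assessed language; the subtask list mirrors the description above, but the field names and percent-correct scale are assumptions for illustration.

```python
# Sketch of a record structure for TLLA results, one per teacher per
# assessed language. Field names and the 0-100 scale are assumptions.
from dataclasses import dataclass, field

# Subtasks as described in the report text.
SUBTASKS = ["speaking", "listening", "reading", "writing",
            "vocabulary", "grammar"]

@dataclass
class TLLAResult:
    teacher_id: str
    language: str                                # LOI assessed
    scores: dict = field(default_factory=dict)   # subtask -> percent correct

    def mean_score(self) -> float:
        """Unweighted mean across administered subtasks."""
        return sum(self.scores.values()) / len(self.scores)

# Hypothetical usage: one teacher assessed in one language of instruction.
result = TLLAResult(
    teacher_id="T-001",
    language="Chichewa",
    scores={s: 70.0 for s in SUBTASKS},
)
print(result.mean_score())  # 70.0
```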
