A Monitoring, Evaluation, and Learning (MEL) Framework for Technology-Supported Remote Trainings [CIES Presentation]
Existing research on the uptake of technologies for adult learning in the global South often focuses on the use of technology to reinforce in-person learning activities and too often relies on an oversimplified “with or without” comparison (Gaible and Burns 2005; Slade et al. 2018). This MEL Framework for Technology-Supported Remote Training (MEL-Tech Framework) offers a more nuanced perspective, introducing questions and indicators that examine whether the technology-supported training was designed on a solid theory of learning; whether the technology was piloted; whether time was allocated to fix bugs and improve functionality and user design; how much time participants spent using the technology; and whether built-in features of the technology provided user feedback and metrics for evaluation.
The framework presents minimum standards for evaluating technology-supported remote training, which in turn facilitate the development of an actionable evidence base for replication and scale-up. Rather than being “just another theoretical framework” developed from a purely academic angle, or a framework stemming from a one-off training effort, this framework is built on guiding questions and proposed indicators that have been carefully investigated, tested, and used in five RTI monitoring and research efforts across the global South: in the Kyrgyz Republic, Liberia, Malawi, the Philippines, and Uganda (Pouezevara et al. 2021). Furthermore, the framework has been reviewed for clarity, practicality, and relevance by RTI experts in teacher professional development, policy systems and governance, MEL, and information and communications technology, as well as by several RTI project teams across Africa and Asia.
RTI drew on several conceptual frameworks and theories of adult learning in designing this framework. First, the underpinning theory of change for teacher learning was informed by the theory of planned behavior (Ajzen 1991), Guskey’s (2002) perspective on teacher change, and Clarke and Hollingsworth’s (2002) interconnected model of professional growth. Second, Kirkpatrick’s (2021) model for training evaluation helped determine many of the categories and domains of evaluation. However, this framework not only provides guiding questions and indicators for evaluating one-off training events in terms of participants’ reactions, learning, behavior, and results (the focus of Kirkpatrick’s model) but also includes guiding questions and indicators covering a “fit for purpose” investigation stage, a user needs assessment and testing stage, and long-term sustainability. Furthermore, the framework’s guiding questions and indicators consider participants’ attitudes and self-efficacy (drawing on the research underpinning the theory of planned behavior), as well as participants’ ongoing post-training application, experimentation, and feedback (Clarke and Hollingsworth 2002; Darling-Hammond et al. 2017; Guskey 2002). Lastly, the framework integrates instructional design considerations regarding content, interaction, and participant feedback that are uniquely afforded by technology.