The best ways to improve evaluations are less technical than people think. We evaluators have an uneasy attitude toward this conclusion--our livelihoods depend on maintaining the impression that we have complex technical knowledge beyond the grasp of ordinary folk. I am not going to do anything to undermine that impression, but it's striking how much simple, common-sense approaches can improve evaluations. I recently attended the American Evaluation Association (AEA) conference in Washington, D.C., where low-tech solutions were all the rage. I'm not saying we didn't also talk about Bayesian Quantile Regression Approaches to Longitudinal Program Evaluations, because we did. It's just that most of the sessions that caught my eye were pretty straightforward and simple to grasp. The fact that the field is still grappling with these common-sense issues may be a worrying threat to our perceived expertise.

The low-tech nature of the sessions comes partly from the conference's focus on learning. This was reflected in the title of the conference, ‘From Learning to Action’, and in the word cloud (above) generated from the titles of all the talks. The conference theme chimes with the aim of RTI's International Education work over the last year. I work in the Monitoring, Evaluation and Research (MER) team in the International Education group at RTI International. We spent the last 12 months reviewing our education programs to understand how to improve learning and action from the data we collect. Many of our conclusions mapped onto the key debates at the conference. Here are a few of my favorites:

Learning from evaluations doesn’t just happen

AEA president Kathryn Newcomer opened the conference with the message that learning doesn't just happen--you need to "make an appointment" to make it happen. The US federal government has adopted ‘learning agendas’ as a tool to ensure a strategic approach to learning. Learning agendas are a more elaborate version of a tool that we have been experimenting with over the past year to ensure that we ask three things about any data collection exercise:

1) What information does the project need? What questions do we need to answer?

2) Can the questions be addressed adequately by the data?

3) What action will be taken based on the answers (that is different from the action you were planning to take anyway)?

It is also important to include learning in implementation plans and calendars. Project staff are busy people and there is always an urgent deadline around the corner. Learning conversations will only take place if M&E and program teams plan them in advance.

Plan for action with an action plan

The experts really helped to demystify this one. “An action plan is a plan for taking action,” they explained. Obvious or not, action plans are a great way to improve programs and practice. They help organize recommendations from monitoring, evaluation and research, making sure recommendations aren't forgotten and that you don't get overwhelmed by them. They can be used to keep track of the issues identified by research, whether or not those issues can be tackled right now.

Amy Griffin, from the Yale Consultation Center, described the essential elements of an action plan: it should be task specific, have a timeline, assign responsibility and include criteria to evaluate success. Action plans work best when everyone involved in an evaluation is reminded throughout that action is the end goal of the work. This keeps the evaluation questions focused and keeps people on task. RTI instituted the use of formal action plans as part of the USAID/School Health and Reading Program in Uganda, and they are now becoming part of our standard approach to M&E.

A checklist manifesto for evaluation

The benefits of a checklist, advocated most notably by Atul Gawande, are clearly being felt in the evaluation field. Checklists ensure we think of all the key things to do in an evaluation--and there is a lot to think about with an evaluation that is technically sound, involves all the right people in its design and leads to learning. They also ensure we include the right things in our evaluation reports, a point currently championed by USAID. A team at Western Michigan University has compiled a valuable library of evaluation checklists covering all stages of an evaluation, from design, management and stakeholder engagement through to analysis, reporting and taking action.

Make sure respondents understand what you are asking them

There can't be many ideas simpler than this one. When you ask a respondent a question, make sure they understand what you are asking. I suspect most interviewers worth their salt have always done this. But giving this technique a name--"cognitive interviewing"[i]--has allowed a body of knowledge and expertise to coalesce on the subject. Jennifer Ridgeway from the Mayo Clinic led a training on this issue. Interview responses can be off, she said, because respondents fail to understand the question; they fail to retrieve the required information; the information they bring to mind is inaccurate; or they fail to map it correctly onto the response options. Techniques to understand how respondents interpret questions include asking them to think aloud as they answer, or probing them directly, for example, about the meaning of words in the question. We found cognitive interviewing invaluable when developing culturally relevant measures of social and emotional learning in Tanzania. We were asking whether children had 'discipline' and 'confidence', and the local perceptions of these terms were often subtly different from our own[ii].

I came away from the conference celebrating the serious study of low-tech aspects of evaluations. I counted 69 panels with the word ‘action’ in their title. These aspects of evaluations may be easy to grasp, but they aren't always easy to get right. In the conference presentations that I usually attend, data-driven action receives attention in a couple of bullet points on the final slide, at most. A careful evaluation design to ensure valid findings is important, but acting on those findings is a topic worthy of systematic inquiry too. If you think it's just a matter of common sense, you may be surprised to see how much knowledge there is out there about how to do it well. At RTI we have been drawing on this body of knowledge as we continue to ensure that the data we collect lead to better programs and improved education.

To read more from Matthew Jukes, click here, and follow him on Twitter @matthewchjukes.

To read more documents from the MER team, click here.

 

[i] Most likely coined in a 1992 book by Fisher and Geiselman: Memory enhancing techniques for investigative interviewing: The cognitive interview. Springfield, IL: Charles C. Thomas.

[ii] Jukes, Gabrieli, Mgonda, Nsolezi, Jeremiah, Tibenda & Bub (in press). “Respect is an Investment”: Community Perceptions of Social and Emotional Competencies in Early Childhood. Global Education Review.

 

About the Expert

Dr. Matthew Jukes is a Fellow of International Education at RTI International. He has twenty-five years of academic and professional experience in evaluating education projects, particularly in early-grade literacy interventions and the promotion of learning through better health. Dr. Jukes' research addresses culturally relevant approaches to assessment and programming in social and emotional competencies in Tanzania; improving pedagogy through an understanding of the cultural basis of teacher-child interactions; and frameworks to improve evidence-based decision-making. He is Principal Investigator of the Playful Learning Across the Years (PLAY) measurement project and Research Director of the Play Accelerator research program, both funded by the LEGO Foundation. Dr. Jukes is also the Research Director of the Learning at Scale research program. His research also contributes to improving academic and social-emotional learning through RTI's projects to support the Tanzanian education sector.

Previously, Dr. Jukes was an Associate Professor of International Education at the Harvard Graduate School of Education and Senior Director of Global Research, Monitoring and Evaluation at Room to Read. Dr. Jukes has also applied his research to work with the World Bank, UNAIDS, UNICEF, UNESCO, USAID and Save the Children.