This blog was written by Luis Crouch (RTI) and Silvia Montoya (UIS) and originally appeared on the UNESCO Institute for Statistics Data for Sustainable Development blog. Reposted with permission.
The recent symposium Innovations in Global Learning Metrics: A Focused Debate among Users, Producers and Researchers, hosted by Arizona State University’s Center for Advanced Studies in Global Education (CASGE), brought together a wide range of stakeholders to discuss how to more effectively use global learning metrics for education policymaking. Some of the most interesting discussions focused on the options to achieve globally-comparable reporting on Sustainable Development Goal 4 (SDG 4) based on a paper prepared by the UNESCO Institute for Statistics.
The paper generated excellent discussion during the symposium, as well as pre-symposium commentary from Kadriye Ercikan (University of British Columbia), Tünde Kovács Cerović (Belgrade University and Open Society Foundations), Radhika Gorur (Deakin University) and William H. Schmidt (Michigan State University). The paper and written comments are available on the Arizona State University website. But the discussion need not end with the symposium – and in fact, it shouldn’t. In that spirit, we would like to share some additional insights on the need for comparability and standardisation in measurement, the importance of measurement in achieving on-the-ground improvements, and whether (and how) the academic and research community can influence policy setting.
First, the UN (and other) institutions that are custodians of the measurement at this point have a mandate. There is not much choice but to follow that mandate. The UN system comprises member states, which ultimately decide on the work programme of the organizations. Specific agencies have been given the mandate to measure and track performance against a set of fixed SDG 4 indicators, in a manner that is as standardised and comparable as reasonably feasible. The Report of the Inter-Agency and Expert Group on Sustainable Development Goal Indicators is very specific on this point: “Global monitoring should be based, to the greatest possible extent, on comparable and standardised national data, obtained through well-established reporting mechanisms from countries to the international statistical system”.
As was wisely noted at the meeting, though, while this is a policy or even political mandate now given to the professionals, professionals (and academics) shape the agenda and give policymakers and political leaders a sense of what is possible. Professionals cannot entirely “hide” behind a mandate. But we honestly think that, had policymakers simply deferred to a technocratic agenda instead of having opinions of their own, the indicators would not be nearly as demanding on us as they are. We are being forced to stretch, especially in areas such as adult learning, civic engagement and sustainability, and digital skills. We are not sure the public and NGO researchers and officials necessarily wished this difficult challenge on themselves.
But more importantly, and as was also wisely noted during the symposium, professionals ought to have the moral courage to engage with their mandate, not just “obey”.
To us, one of the most important reasons to have comparability and standardisation has nothing to do with efficiency, cost savings or accountability, but with equity and social justice, taking as a point of departure the content and skills to which children and youth are entitled. Without the standardised and comparable measurements we already have – which give us a common language and understanding – we would not know some of the things we increasingly know, in a comparable and multi-country (that is, fairly generalisable) manner, such as:
- About one-half of the global cognitive inequality is between countries and one-half is within countries, at least insofar as this can be measured using assessments. Knowing this should be helpful to both governments and development agencies in setting allocative priorities.
- We have a much clearer sense of what it takes to reduce learning inequalities within countries – less so between them.
- We increasingly know that factors such as wealth and ethnicity/ethno-linguistic discrimination or marginalisation count for more in driving cognitive inequality than gender or (less clearly so) the “pure” urban-rural divide.
- We also increasingly know that, because much inequality (both between and within countries) cannot be explained by any clear “ascriptive” factors (gender, parental wealth, ethnicity), “simple” (but not really so simple!) lack of management capacity and quality assurance is a real problem. And data can help, not just in setting policy but in managing and “moving the needle” on that policy.
You cannot know how much inequality exists, or what drives it, unless you measure it with a standardised measurement stick; without one, it is difficult even to judge that two things are not of equal length. But we also note that the ideal might be “as much localisation as possible, as much standardisation as necessary”. That is why the UIS’ emphasis has been on supporting the comparability of existing (and future) national assessments rather than on backing, adopting, “imposing” or even endorsing specific global assessments.
Second, it was noted that measurement is not really the issue – action by teachers and systems is. This is true, and we would certainly back the idea that the “improvement” function receive more funding than the “measurement” function. However, improvement gains traction more easily when one knows what is going on. (There is, of course, already far more backing of the “regular business” aspects of education systems: assessment is a tiny fraction of that cost. There is, however, under-investment in how one actually uses assessments – the right combination of assessments – to improve.) But there is still a measurement mandate aimed at making the problem visible so that resources for improvement are dedicated, and, since there are efficiencies in specialisation, institutions such as the UIS (and their equivalents at WHO, FAO, etc.) have to focus on measurement. But perhaps such specialised bodies ought to reach out more and support others whose mission is to use the data to support teachers (or doctors and nurses, agricultural extension agents, etc.). Along those lines, we also suggested (with tongue only partially in cheek) that perhaps international assessments ought to be less, not more, relevant – or at least less decisive. That is, they ought to be only a reference point (albeit a useful one), and national assessments ought to take centre stage. This is the UIS’ position.
A last major issue that was discussed, partly in reaction to the paper but partly also because it was “in the air”, was whether (and how and why) policy research and academic input influence policy. Some were skeptical or pessimistic. Others not as much. In our view, there is impact. Not, perhaps, immediately. And few, if any, policymakers make decisions solely based on evidence. Nor is the impact of research typically traceable to particular academics, books, papers or conferences – it is a much more diffuse process, which can contribute to the sensation that one is not having impact. And, of course, political economy and just plain politics have a lot of influence. But J.M. Keynes got it about right: “Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back… Not, indeed, immediately, but after a certain interval… soon or late, it is ideas, not vested interests, which are dangerous for good or evil.”
We can cite a few optimistic examples or suggest ways to think about the impact of research:
- While politics and political economy play a role, no one likes to say so publicly. Policymakers seldom say, “Oh, that was a purely political decision.” They often pay lip service to rationality, data and evidence, as well as, at least in democracies or semi-democracies, to common sense around what is right and just. Academics and researchers can take advantage of this tendency and demand to be heard. In a similar manner, human rights get announced before they get enacted, and they get enacted partly because they were announced and someone then used the announcement to push. As noted, the impact is not immediate or traceable to particular individuals.
- A good example is the case for girls’ education and the progress that was made over the last 40 years or so. Researchers were instrumental in this. There was not necessarily a political case. Nor was there that much grassroots pressure from villagers or even urban dwellers. On the contrary, our experience suggests that with regard to these issues the grassroots were pretty feudal or patriarchal. Researchers and social activists, both global and local, eventually had an impact.
- It also helps if researchers are sensitive to issues and if they gain the initial trust of policymakers by helping with smaller, relatively short-term and relatively less weighty matters, as a way of gaining the space to have impact on the more serious issues. This can happen with individual researchers and think tanks, institutions, universities and centers, such as CASGE. Admittedly, this is a long game, but social development does not happen overnight.
- “Situation rooms” that show data and modelling in visually striking ways can be helpful under certain circumstances. Policymakers often react against what they see as naiveté on the part of researchers who signal that they expect policymakers to act right away on the evidence presented. But in our experience, showing the results of simulations in real time during a policy discussion (e.g. projecting even a simple, Excel-based model on the wall) can be useful. This varies by bureaucratic culture, of course. And it is more useful if one can take the “situation room” (again, just a simple projection of a simulation model) to the policymakers rather than having the policymakers come to the “situation room” – unless they happen to be nearby.
- Finally, it is also important to take on board the fact that it is usually local intellectuals and activists who will carry the day. UN bodies, as was noted, can’t really “make” governments take action based on data/evidence. But the data can support local intellectuals and activists who can pressure governments, e.g., in eliminating school fees, in increasing investment in the younger children, etc.
There is neither the time nor the resources and energy to question the commitments themselves. The 2030 Agenda is a call to everybody, and academia is no exception. Initiatives such as the GPE’s KIX are stressing the relevance of knowledge exchange and the areas where academia can play a critical role, if focused on building human capacity at all levels.
Read more by Luis Crouch here.
Read more analysis of measurement and the SDGs here, especially: