Clara Fontdevila
Producing Global Learning Metrics
Today we explore the production of global learning metrics inside the UNESCO Institute for Statistics. My guest is Clara Fontdevila, a British Academy Postdoctoral Fellow at the School of Education at the University of Glasgow.
Clara’s newest article is entitled “The politics of good enough data: Developments, dilemmas, and deadlocks in the production of global learning metrics,” which was published in the International Journal of Educational Development.
Today’s episode was recorded in front of a live audience at the School of Education at the University of Glasgow. Thanks to Matthew Thomas for organizing the event.
Citation: Fontdevila, Clara, interview with Will Brehm, FreshEd, 318, podcast audio, April 24, 2023. https://freshedpodcast.com/fontdevila/
Will Brehm 3:20
Clara Fontdevila, welcome to FreshEd.
Clara Fontdevila 3:23
Thank you very much.
Will Brehm 3:31
So, we are here at the University of Glasgow actually recording this conversation in person. [Cheers from crowd]. A very enthusiastic crowd is here to listen to your research today. Here we are, Clara, to talk about some of your research. So, I want to start actually with sort of unpacking the UNESCO Institute for Statistics, UIS for short. What is it?
Clara Fontdevila 3:56
Well, it’s a part of UNESCO, as the name says, that is responsible for the production and harmonization of statistics in education and the other areas in which UNESCO works. It is quite an autonomous institute, and a relatively young one, because originally statistical work happened within UNESCO itself; there was a division of statistics taking care of this. But for different reasons, after a period in which it was clearly the most trustworthy, the most reliable source of educational statistics, its reputation started to be eroded during the 1980s, overshadowed by the emergence of other organizations that were doing work on statistics. So, it was a bit of a crisis moment. And in order to fix this and reinvigorate the reputation of UNESCO’s statistical services, there was this willingness to create a new, more or less semi-autonomous institute, with considerable managerial and political autonomy in terms of hiring and so on, in order to revitalize the reputation of UNESCO’s statistics. And to prove it was truly an autonomous organization, it was even moved to Montreal, in Canada. It’s not based in Paris; the UIS is in Canada to, let’s say, give proof of this independence. And it has… done the trick.
Will Brehm 5:16
So, this semi-autonomous agency of UNESCO that deals with statistics, do they only focus on education? Or are they doing statistics across everything that UNESCO works on?
Clara Fontdevila 5:26
No, they’re responsible for many other areas. But it’s true that education takes up a lot of the work in relative terms, and the visibility of education statistics is particularly prominent. So, in practice, everything related to education is a big organizational priority for UIS, because they are judged on the quality of education statistics in particular, I would say.
Will Brehm 5:48
Okay, right. So, education statistics become sort of key to what they’re trying to do.
Clara Fontdevila 5:53
Yeah. It’s high stakes for them in terms of reputation. It’s very visible, especially with the advent of the SDGs, and SDG 4 in particular. The ability of UIS to deliver, to produce, all the new indicators that were created became a bit of a high-stakes exercise for them, because they had to prove that they had the technical capacity to do this…
Will Brehm 6:14
So, maybe we should dig into Sustainable Development Goal #4, because in 2015, when the SDGs were promulgated, SDG 4 was sort of the first time that the global goal wasn’t simply about access. It was also about, quote, unquote, quality. But of course, what does quality mean? And how might one go about measuring quality? It seems to be something that UIS would be particularly interested in trying to figure out-
Clara Fontdevila 6:39
Yeah. With the advent of the SDGs and SDG 4 there was all this discourse… And it’s true that for the first time it was more than access, more than enrollment, more than completion. There was this attention to quality. Some call it the “learning turn”, or the “quality turn”, and so on. This, to be honest, is sometimes a bit of an exaggeration. Earlier agendas, like Education for All, also had an important quality component. There has been a preoccupation with learning for years. But it’s true that it crystallized very clearly in SDG 4. This enrollment-plus-learning, or access-plus-learning, agenda crystallized with SDG 4. And for UIS this was a good opportunity, but also a big challenge, for different reasons. The thing is that for other areas of education, related basically to access and other indicators, UIS has a pretty well-established relationship with national ministries of education and national institutes of statistics. So, the process more or less runs smoothly; they get the data and so on. It’s a very routine thing. But when it comes to learning data, it is a bit more complicated, because basically they need to rely on other data suppliers that are not necessarily national education ministries. I’m talking here basically about cross-national assessments: PISA, but also TIMSS and PIRLS. So, the assessment consortia in charge of these, the IEA or the OECD, but also regional consortia like the Laboratorio, or PASEC or SACMEQ… now the UIS had to establish a relationship with them in order to be able to report on learning data. And it was a challenge, because that kind of relationship did not exist before.
Will Brehm 8:11
Right. UIS as an institution had to build relationships not only with ministries of education, getting the data that was normally reported to UNESCO, but also find all of these international organizations collecting cross-national data, build relationships, get the data. But then obviously, these different organizations are measuring different things, right? So, having to put it all together sounds like a technical nightmare.
Clara Fontdevila 8:36
The fact is that there had been attempts in the past to do something like this; to start working on some form of globally comparable learning data. There have been attempts to do this at least since the late 2000s. There was the Observatory for Learning Outcomes… these kinds of attempts to harmonize existing cross-national assessments, and so on. But it had proven difficult, precisely because building this kind of trusting relationship with countries and with cross-national assessments was not easy. With the adoption of SDG 4, this became an inescapable responsibility, basically, so they needed to find a way. And yes, of course it was a technical nightmare, because these cross-national assessments measure similar things, but not exactly the same and not in the same way. They apply to different grades or different ages. So, they partially overlap, but not in a perfect way. So, it is complicated. And another layer of complexity is that cross-national assessments are not the only source of learning data; you also have national assessments. Harmonizing national assessments is even more complicated. But also, you have to make a decision: are we prioritizing national assessments, or cross-national assessments, or neither of them? So, making all these decisions, yes, is a technical nightmare, but it’s also a political nightmare in many ways, because you’re prioritizing different ways of proceeding, you are creating new responsibilities or mandates for countries, you’re giving much more visibility to certain cross-national assessments, maybe… this kind of stuff. So, it was a technical and a political nightmare. It was not easy; it is not easy at all.
Will Brehm 9:58
How does UIS go about navigating some of those complexities, some of those politics between these big organizations? I would imagine PISA, for instance, or the IEA want their assessments to be expanded and picked up, because they’re getting something in return; they have an interest in doing that. But UIS might not want to work only with them, right? So, how does UIS navigate some of those tricky politics of global learning metrics?
Clara Fontdevila 10:22
I think that one big effort in this area has been creating a sort of infrastructure in which everyone is brought to the table. So, there have been different efforts in that regard. The most visible and successful one has been GAML, the Global Alliance to Monitor Learning. Basically, this is a space to which cross-national assessments are invited, international organizations are invited, there are representatives from civil society, there are country representatives, of course… And there has been this effort to make sure this is a democratic and open space, a plural one. It has not been perfect, and it has been criticized because it’s not as democratic, as open, as inclusive as some would want. But there has been this effort to make sure there is an open, transparent conversation, with the expectation that this will facilitate some form of consensus building, even if it’s a slow process. There was this willingness to make sure that this doesn’t feel like an obscure exercise; it is open to the public, it is transparent, lots of documentation is produced tracking every decision GAML makes, and so on. So, I think that this has been a big part of this attempt. Of course, when you bring everyone to the table, what happens is that people have different priorities, different options. And there have been different options given consideration over the last five, six years. First of all, for instance, there was an idea to create a new test for all countries, ex novo, something completely new. This was discarded because it was felt to be not particularly realistic, riddled with implementation difficulties. Then there were different proposals, and especially these big assessment consortia, the international organizations in charge of regional or international assessments, proposed basically to rely primarily on cross-national assessments and build some form of comparability to make sure that the data was perfectly comparable. This was another option, but of course its coverage is limited. Not every country participates in a regional or international assessment. And secondly, it was a bit problematic from the perspective of country ownership, because in a way it was like incentivizing countries to prioritize participation in cross-national assessments rather than the articulation and construction of their own national assessments. So, prioritizing cross-national assessments became problematic in its own terms. And I think that what the UIS has eventually advocated for is a sort of hybrid or flexible approach, in which they maximize the number of data sources that can be used to report on learning metrics.
Will Brehm 12:44
So, cross-national, national, anything. Anything can be sort of brought together?
Clara Fontdevila 12:49
Exactly. Well, with some standards, of course, and there is a whole regulation about this. But yeah, maximizing data flexibility. And this has been a way of combining technical rigor with country ownership and capacity-building… making sure that countries are not forced to go through a certain route, and that this is not, let’s say, trumping national priorities.
Will Brehm 13:09
UIS seems to have taken this sort of broad approach, where everything is acceptable. And maybe that’s to get everyone on their side, to not make anyone too upset with the approach that they’re going to take?
Clara Fontdevila 13:20
Yeah. It’s a way of maximizing coverage and country ownership and not alienating any constituency. Because of course, at some point, this was a bit of a contested or fraught arena. So, there was this deliberate attempt to keep this from having a paralyzing effect. Because different organizations want different things; that was pretty clear at some point. For instance, cross-national assessments claimed that using national assessments to create globally comparable learning data was not technically rigorous enough. They considered that in-built comparability, perfect comparability, was non-negotiable. Other organizations and some governments claimed that it was important to make sure that national assessments could be used to report on global indicators, again to maximize country ownership, to make sure that countries are building their own assessment systems and not relying solely on cross-national assessments. So, it was a way of keeping this from becoming an obstacle or an impasse, this attempt to create something hybrid, flexible enough. And this is how the idea of “fit for purpose data” emerged. So, this attempt at pragmatism, rather than letting the perfect be the enemy of the good… trying to open up the possibilities a bit.
Will Brehm 14:29
It’s quite fascinating. So, how does UIS then go about trying to harmonize these data sources? I understand why they took this approach: to not alienate anybody, to maintain country ownership. Politically it seems right, a good way to go. But it also creates this huge technical problem: you get all of this data coming in, and how on earth do you harmonize it to then say anything about SDG 4?
Clara Fontdevila 14:55
Exactly. So, this has been like the new challenge. I would say that the first challenge was to decide on the data suppliers, and this was solved through this kind of hybrid approach. But then of course, when you accept that many things can be used, you have a new challenge, which is sometimes called the linking debate. Because you have to decide how you harmonize this data; the results have to be charted or mapped against a common scale. This means basically deciding: getting an A on this test means getting what on that other test? It means linking assessments. It was not easy; it was complicated. And again, different organizations were proposing different approaches. I will not go into the technicalities, but again, the big tradeoff was: should we prioritize technical rigor, or a flexible approach that allows countries to rely on their own instruments and so on? And again, the debate was settled in favor of this more flexible approach, at least for the interim, for the immediate time. Basically, rather than prioritizing one course of action, they have been exploring different ways of linking existing assessments, in order, again, to not alienate any constituency and to see what really works better for everyone. So, there has been this attempt to navigate…
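To make the idea of mapping different assessments onto a common scale concrete, here is a minimal illustrative sketch of one classic psychometric linking technique, mean-sigma linear equating. This is not the method UIS adopted; the test names, score distributions, and function name below are hypothetical.

```python
import statistics

def mean_sigma_link(scores_a, scores_b):
    """Mean-sigma linear equating (random-groups design).

    Returns a function that re-expresses a score on test A on the
    scale of test B via:  b = (sd_b / sd_a) * (a - mean_a) + mean_b
    """
    mean_a, sd_a = statistics.mean(scores_a), statistics.stdev(scores_a)
    mean_b, sd_b = statistics.mean(scores_b), statistics.stdev(scores_b)
    return lambda a: (sd_b / sd_a) * (a - mean_a) + mean_b

# Hypothetical results from a national test (0-100 scale) and a
# cross-national assessment (roughly 300-600 scale).
national = [52, 61, 47, 70, 58, 64, 55, 49]
cross_national = [412, 455, 390, 501, 430, 468, 420, 398]

to_common_scale = mean_sigma_link(national, cross_national)
print(round(to_common_scale(60), 1))  # a national score of 60, re-expressed
```

Even this toy example shows why linking is contentious: it only makes sense if the two tests measure the same construct in comparable populations, which is exactly the assumption disputed in the debates described here.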
Will Brehm 16:05
It’s so fascinating to understand some of those negotiations. It’s almost like diplomacy at this international level with so many different sorts of groups of people with different stakes. I want to go back to SDG 4 because I guess something that is a bit unclear to me is what are they even trying to measure? Like they’re trying to get all this learning data and create an indicator, but an indicator for what?
Clara Fontdevila 16:28
The question of what quality was, and how we move from quality to learning… this was a big debate during the negotiation of SDG 4, in the run-up to the final adoption of the targets. Basically, there was a general consensus that we needed to move beyond the single focus on enrollment and access and completion; there was some clarity on this, and this willingness to say, okay, we have to pay attention to quality. But then there was a bit of a cleavage or division. On one side were those saying: okay, let’s focus on learning outcomes as a new measure, a pragmatic solution, to make sure that we are not focusing too much on enrollment. On the other side was a constituency that feared that paying too much attention to learning outcomes would narrow down the agenda, in the sense that other dimensions of quality, questions of inputs and of the learning process and so on, might not receive sufficient attention. And this was a kind of debate that permeated all the SDG 4 negotiations. This tension between learning outcomes and quality seems a bit obscure, as if they were two different constructs… but it was difficult to find a consensus. Finally, however, a consensus was found, and we have SDG 4. But it was a bit of a consensus on the basis of ambiguity. Because basically, the idea was to make sure that there is some language about learning outcomes, and also some language about quality in a more holistic or broader sense… This ambiguity and tension very much shaped SDG 4, and became very apparent again when the moment came to quantify the SDG 4 targets. Because then you have to decide: okay, but what do you mean by quality? Did you mean learning, or did you mean process, or what? So, these kinds of tensions re-emerged during the quantification stage of SDG 4.
Will Brehm 18:15
I’m going to read the target. I think it’s quite valuable to think about how it was created and what it actually means, and then what that might mean for UIS and how they go about trying to harmonize all of this data, linking all these different learning assessments. So, target 4.1, which people say is the learning target, right? It reads, “ensure that all girls and boys complete free, equitable and quality primary and secondary education leading to relevant and effective learning outcomes”. So, who made that target?
Clara Fontdevila 18:47
It was very much a collective exercise. I would say that precisely because this debate was informing or shaping the discussion, it is an attempt to create an ambiguous enough statement. Some say that it is a product of wordsmithing, as they call it: trying to refine the language until it is acceptable to all parties with different agendas. So, yeah, I would say it is the product of this. And this is why people sometimes claim that SDG 4 reads so unnaturally, why it feels so wordy. It is because there was this attempt to make sure there is something for everyone involved in the debate, so that everyone can agree or find it acceptable enough.
Will Brehm 19:23
And so “everyone” in this case was member states that were members of UNESCO. And they were sort of negotiating this in the lead up to 2015?
Clara Fontdevila 19:31
Yeah. The member states were part of the story, but probably the biggest tension affected not so much the states as the international organizations and civil society organizations involved in this. There have been some inside stories on how this negotiation went and all the difficulties of building consensus, making progress. There is this book, edited by Antonia Wulff, that does a wonderful job of explaining all the intricacies of the debate. But basically, there were different attempts at the very beginning of the post-2015 process, as it was called. There was an attempt by some constituencies to promote a learning-focused agenda. I think that the Center for Global Development, for instance, even proposed a learning goal. This was later met with a lot of resistance on the part of some civil society organizations, the Global Campaign for Education, Education International, that made this point about the risks of prioritizing or focusing too much on learning outcomes. And this is where the tensions started. I would say that finding this consensus among international organizations was more challenging than finding the consensus between countries.
Will Brehm 20:39
Interesting. So, in a sense, target 4.1 has the words “free, equitable and quality” in it, but also the words “learning outcomes”. So, it’s like a nice compromise that sort of has everything. But then the target has an indicator, right? So, it’s 4.1.1, and I’m going to read it out because I think it’s valuable to get this on record. It’s the “proportion of children and young people (a) in grades 2/3; (b) at the end of primary; and (c) at the end of lower secondary, achieving at least a minimum proficiency level in (i) reading and (ii) mathematics, by sex”. Who came up with that indicator?
Clara Fontdevila 21:20
Again, many people were involved. But basically, during the late stage of the post-2015 debate, a group was created, the so-called TAG, the Technical Advisory Group. It was a gathering of people from different international organizations: UNESCO of course, but also the OECD, UNICEF, and so on. It was a group of technical experts at the start, thinking about indicators for the potential targets, because at some point it was pretty clear what the target was going to be. So the idea was, okay, let’s think about possible indicators. The TAG, this technical group of experts, was eventually expanded to also include representatives from civil society, member states, and so on. It was called the Extended TAG. And it was in the context of this group that the indicator frameworks, including the learning indicators, were more or less refined. But it was a very long process that benefited from a lot of consultations. So, it was really a collective exercise. Sometimes people want to think that there was someone in a room making the decision, but it was much more of a collective exercise. And this is why it was so difficult, because it is a truly collective thing. And this means building consensus among different agendas, different ways of thinking. Not even agendas, but different expertises, different backgrounds…
Will Brehm 22:33
It’s sort of like making sausage, right? You’re pushing it all together and then it’s like, oh, my gosh, what do we do with it? So, now the UIS is sort of in charge of figuring out how to get data to respond to this indicator.
Clara Fontdevila 22:46
Yeah. I mean, of course, everything is a very collective endeavor, but at the end of the day, the UIS is formally responsible for this. And not only within the education field: because the SDGs are a UN thing, UIS is in many ways also held accountable to the statistical community within the UN. Which means that, of course, they have this imperative to produce data on a timely basis, to produce rigorous enough data. They are in a difficult position in this way. I mean, they have at least two masters: the education community, which is sometimes much more in favor of these more pluralistic exercises, and the UN statistical community, which has other expectations of UIS. They have to navigate these multiple tensions. We wrote a chapter with Sotiria Grek on this double imperative the UIS is facing these days, between a democratic approach on the one hand and a technical, rigorous approach on the other.
Will Brehm 23:42
It almost seems like a “mission impossible”. And then the other thing that fascinates me is that the SDGs were created in 2015, and they’re supposed to run to 2030. Today is 2023, and UIS still seems to be trying to figure out how to even get data to connect to this indicator.
Clara Fontdevila 24:02
Well, I would say that they are in a process of constantly refining the reporting standards and the reporting routines and so on. And basically, they have been pushing for this approach that maximizes flexibility, that really puts a premium on country ownership. Countries, depending on whether they already have a national assessment or already participate in a cross-national assessment, are given different possibilities. Countries are put in the driving seat in many ways, and they are given the option to make use of what they already have, which I think is pretty important. Rather than requiring a new test, or doing complicated things, [the idea is] okay, let’s make sure we are using whatever is there, even if that in a way sometimes compromises, or some claim it might compromise, the technical sophistication of the final metrics.
Will Brehm 24:49
So, do some member states already report data on SDG 4.1.1? And is it publicly available, to sort of see how different countries are…
Clara Fontdevila 25:00
Yeah. There is a lot of globally comparable learning data around these days. For instance, the World Bank, in collaboration with the UIS as a matter of fact, also has this global Learning Poverty target, and so on. So, there is a lot of data on learning progression being produced. When you think about how it was 10 years ago, we now have a lot of data, no?
Will Brehm 25:20
So, by 2030, do you think the global community will be able to say something about SDG target 4.1, based on all of the data that’s coming in, and, in another seven years, potentially even more data?
Clara Fontdevila 25:35
Of course, whatever the final say is, I’m sure that some people will claim that it is not strong enough. I mean, there’s always debate on what exactly we are capturing with these indicators. So, I’m sure there will be some discussion, but I would say that indicators can never be perfect. It is inevitable.
Will Brehm 25:52
And I would imagine more politics to come, right?
Clara Fontdevila 25:55
Yeah. I think that the UIS, precisely, feels increasingly at ease with the politics. So, yeah, it is a very politically volatile field sometimes, but I think that they are working their way through this, and preventing the politicized character of the discussion from becoming an obstacle or an impossibility.
Will Brehm 26:12
You know, Clara, what I love about your research is that it’s almost like you’re pulling back the curtain on a process that really goes unnoticed by a lot of researchers, because of access issues, I would imagine, right? It’s probably quite hard to do what might be considered an ethnography of a global institution. And in a way, you’ve somehow managed to get into some of those spaces. Maybe you weren’t able to get into all of them. But I guess, in the coming future, do you think we need more of these ethnographies of global institutions?
Clara Fontdevila 26:43
Well, ethnography is a very big word, but it’s true that I’m pretty convinced that we need more empirical research on the inner workings of these organizations. Rather than relying only on documentary analysis, which is very important in its own way, I think that we really need to open the black box of these organizations: to observe how they work, to actually talk to the people working in there. I think it would be very informative, and it would help us refine our ideas about these international organizations a bit. Because they are such large bureaucracies, with many different interests within them, that we really need a fine-grained understanding of how they work.
Will Brehm 27:33
Clara Fontdevila, thank you so much for joining FreshEd and thank you so much for doing this in front of a live audience today in Glasgow.
Clara Fontdevila 27:39
Thanks. Thanks very much.
Want to help translate this show? Please contact info@freshedpodcast.com
Related Author Publications/Projects
Developments, dilemmas and deadlocks in the production of global learning metrics
The growth and spread of large-scale assessments and test-based accountabilities
How and to what extent standards, tests and accountability in education spread worldwide
Learning assessments in the time of SDGs
Mentioned Resources
UNESCO Institute for Statistics
Global Alliance to Monitor Learning (GAML)
Grading Goal Four, edited by Antonia Wulff
Related Resources
The politics of national SDG indicator systems: A comparison of four European countries
Global learning metrics as a ready-made manufactured crisis
How close are we to reliable and global SDG 4.1.1 trend statistics?
Knowledge and politics in setting and measuring the SDGs
Tracking the SDGs: Emerging measurement challenges and further reflections
Reporting learning outcomes in basic education: a country’s options for Indicator 4.1.1
The many meanings of quality education: Politics of targets and indicators in SDG 4
Understanding SDG 4 on “quality education” from micro, meso, and macro perspectives
Have any useful resources related to this show? Please send them to info@freshedpodcast.com