Ben Williamson
Digital Infrastructures of Education
Today we explore the digital transformation of and in education. My guest is Ben Williamson who argues that we are witnessing a new digital infrastructure being built beneath our feet.
The last time Ben joined FreshEd was in February 2020, just one month before the UK went into its first Covid-19 lockdown. When it comes to education and technology, a lot has changed in the past 4.5 years. Ben talks through some of those changes. Ben Williamson is a Senior Lecturer at the Centre for Research in Digital Education at the University of Edinburgh.
Citation: Williamson, Ben with Will Brehm, FreshEd, 367, podcast audio, September 9, 2024. https://freshedpodcast.com/367-williamson/
Will Brehm 1:39
Ben Williamson, welcome back to FreshEd.
Ben Williamson 2:03
Thanks, Will. It’s great to return after all these years.
Will Brehm 2:06
And so much has happened since the last time we spoke. I mean, in fact, Covid happened. You know, I’m so excited to have you on the show again to sort of unpack what has happened in the world of education and technology over these last four and a half years or so. So, you know, I mean, Covid, in many ways, definitely brought together education and technology in ways that were pretty unimaginable sort of before Covid. We were not able to necessarily imagine what was going to happen. You know, how would you describe some of the major trends of what that connection between education and technology has looked like over these last four and a half years?
Ben Williamson 2:42
Oh, thanks, Will. It’s an enormous question and opens up all sorts of issues. But I suppose one of the ways that I’ve been thinking about education and technology over the last years -certainly trends leading up to Covid that have accelerated since- is as a kind of process of reinfrastructuring. We often think of infrastructure as kind of the underlying systems we rely on for our day-to-day tasks. I think what we’ve seen is a kind of reinfrastructuring of the digital systems that underpin and underwrite many day-to-day routine tasks and operations in schools and universities. So, probably, if we went back a decade or more, we’d find that our institutions were running on a bunch of digital services: there might have been learning management systems, we would have had student information databases, and then there would have been a handful of software applications that were used for different purposes, often with in-house IT staff maintaining all of that, making sure it ran well, and so on. But particularly over the last decade, and maybe especially over the last five years, I think we’ve seen that changing, particularly as some of the big technology companies seem to have convinced many educational institutions, particularly in the Global North, to run off the cloud -to run off cloud computing services. So, now our institutions have kind of outsourced all of their IT infrastructure, largely to Microsoft or to Google. So, we have in schools Google Workspace or Google Classroom running off the Google Cloud, which is making new kinds of Google functionality, including AI, available. You’ve got Microsoft 365 running off the Azure cloud, which allows Microsoft, at the moment, to bundle up these kinds of AI copilots as additional functionality too. And then most of the third-party edtech platforms and services we use are all dependent on cloud infrastructure too. To a significant extent, we wouldn’t have the kind of edtech industry we have now without Amazon Web Services, which underpins a whole bunch of it; most of the functionality, particularly the data operations of edtech, wouldn’t be able to operate without the agreements these companies have with Amazon Web Services. So, this is what I mean by a kind of reinfrastructuring. There’s been a steady, incremental shift to becoming more dependent on big tech companies, on the cloud, which enables tech companies to change the functionality of the software services that educators and managers are using daily. So, it’s almost like a power from below that we’re starting to see emerge. A kind of infrastructural power, where you have big technology companies, and some of the big edu-businesses as well, underpinning what can and cannot happen in schools. I suppose, just as a quick follow-up, a lot of these companies are also training educators to be amenable to all of this. So, there’s a really lovely paper from a couple of years ago by Radhika Gorur and Joyeeta Dey, where they talk about making the user friendly: making teachers amenable to what this whole infrastructure -the cloud, and the platforms, and even now AI- has to offer. This was happening before Covid lockdowns, but I think it’s really consolidated since.
Will Brehm 6:00
And I think that’s such an interesting insight: a lot of what we’re seeing and what you’re describing existed before Covid. It existed before 10 years ago. I mean, Apple computers have been involved in schooling and education for a very long time, but in the last five years things have radically changed, and maybe this reinfrastructuring is really the way to start thinking about it. I do remember seeing a presentation by Steven Lewis about the sort of Apple education training that they provide teachers, and the certification, and it’s quite interesting. I guess the follow-up question, before we get into identifying some of the issues and challenges that arise from such a new environment, is this: thinking through a lot of these infrastructural changes and these new companies that are coming about, in what ways is it actually changing educational practice as we know it?
Ben Williamson 6:56
Yeah, that’s a really good question, and I think we’re probably only at the very beginning of the in-depth, on-site studies needed to really unpack how educational practices are changing. So, there’s important work being done at the moment, particularly around, you know, what do new kinds of data systems mean for universities or for schools? Or how are teachers using artificial intelligence? I think these training programs are really significant. There’s a kind of concerted effort being made by technology companies, edtech companies, to make teachers ready and amenable for these changes. So, teachers are already receiving professional development from external organizations, and these external organizations are increasingly a part of our education system. Now, whether that’s a good thing or a bad thing is something that we need to explore empirically. But I think our role as researchers is also, importantly, to be able to track these trends. To say these things are changing, and they’re changing at a systemic level. Our education system is changing due to new kinds of public-private partnerships, due to government-industry relationships, due to the way in which industry bodies are involved in teacher preparation and professional development and so on. And I think that sets us up with a whole bunch of really important research tasks to come.
Will Brehm 8:13
This might sound like a stupid question, but all of these companies that are now involved in this new digital infrastructure that you’re describing. Are they in this business because it’s simply about profit?
Ben Williamson 8:25
I don’t think that these kinds of education or tech companies are in it purely for profit, but I do think that understanding these organizations as businesses -with business practices, and business models, and revenue targets, and expectations about what kind of income is available from education- is super important. We saw during Covid that there was almost an edtech bubble, as the value of education technology companies really expanded. And the same was true of the big tech companies making huge gains in the education sector. Now, a lot of that -the kind of Covid edtech bubble- seems to have deflated somewhat. There are some really big examples just at the moment: 2U, which used to be this enormous edtech unicorn, is practically bankrupt. BYJU’S, once the most valuable edtech company on the planet, which sponsored the World Cup less than two years ago, now seems to have virtually collapsed. So, I give those examples because that really indicates that it’s a set of business practices, a set of shareholder expectations, or what investors are pumping into companies, that really maintains these educational technologies. So, whether we see the companies as purely in it for the profit or not is kind of immaterial. What they’re actually resting on is a whole set of business practices and market volatilities that will have downstream effects on the kinds of technologies we can -or maybe that we can’t- use in schools.
Will Brehm 9:54
I recently read the book “Techno-Feudalism” by Yanis Varoufakis. I don’t know if you’ve read this book, but the thesis is more or less that these big cloud companies like Amazon and Apple and Spotify lock you into using their services, because once you get into their services, it’s really hard to then migrate over to any other service. And more importantly, you often like what those services can do for you. Like, Spotify can give you the exact song you want to listen to next, and if you move over to Apple Music, it’ll take years before Apple Music’s algorithm can do that. And so, there’s a bit of pleasure that people get from being integrated into these cloud systems. But he goes on to say that this is like a feudal relationship that we have with these companies. And in fact, he thinks it’s beyond capitalism. For a lot of these companies, it’s not about making profit as such. It’s about extracting rent, like in a feudal system. And I wondered if you see any similarities between that argument and, you know, the way in which schools or other educational institutions are being integrated into this new digital infrastructure?
Ben Williamson 11:05
Yeah, for sure. I mean, I think the notion of rentierism, or rentiership, as a kind of business model in the edtech sector is super strong at the moment. There’s some really important work being led by Janja Komljenovic on this in particular. But the idea, of course, is that education businesses, edtech companies, big tech companies too, have begun to see education as a site in which they can create assets. So, they create platforms, or they extract data out of those platforms, and turn those things into assets, which means those assets will be able to generate long-term income. And they do that in two ways. There’s the monetary route, where you demand ongoing subscription fees from universities or schools, and you expand the various features and upgrades and so on, so that those institutions become virtually locked in. You know, they can’t easily extract themselves except at great expense. So, there’s a kind of ongoing rent-based relationship that is formed there through subscriptions. And then there’s what’s sometimes called data rent extraction as well, where, because of these long-term subscriptions, these companies are able to extract usage data. Not necessarily the personal data, not the sensitive data that we get very concerned about, but the usage data about how the platform’s being used, for how long, what sorts of clicks, what seems to generate metrics of engagement, and all of that kind of stuff. That data can then be turned into derivative products or into feature upgrades, those kinds of things, so that the data becomes a source of value too. I think this is why these companies really want to get into these long-term relationships with institutions. They don’t want to sell you a CD-ROM with the software on it that you just install, and then the deal is done. This is about having a long-term relationship from which the vendor can guarantee a long-term income, in terms of the subscription fees and in terms of the data.
Will Brehm 13:19
I’m just reflecting on my own experience in universities. I just see this everywhere, and -I don’t know, I’m frightened by it. Are you? Like, I’m frightened by using something like Canvas and all of the different applications that now go into my teaching practice. I worry that we are being restricted in the way in which we can think about how to deliver our teaching and learning. How are you reacting to it on a personal level?
Ben Williamson 13:48
I think I share those concerns. I think what particularly worries me is the potential for a kind of erosion of teachers’ professional pedagogic autonomy: that when we increasingly use platforms or various other services for our teaching, it sets the template for what we can or can’t do. The learning management system sets a particular template for what kinds of tasks can be set on the learning management system, or how many readings you have to assign in advance, and so on. So, it really is about setting a set of standardized templates for forms of teaching. Now, of course, it can’t constrain everything. These technologies don’t determine what we do, but institutions are increasingly saying everything must be on the learning management system, and now we’re at a moment where more and more parts of these platforms have automated features. So, there might be some kind of adaptive learning feature at a more advanced stage, or it might be that there’s just some analytics running in the background. All of these things, I think, can subtly -from below, as a kind of infrastructural layer- shape what we do. And I think as educators, we’re increasingly having to think really hard about what this means for our teaching now, and what it might mean for our teaching in five years’ time if current trends towards automation and so on, which seem to be developing at the moment, continue. What would this mean for my teaching then? How would my pedagogic autonomy be affected? And then you can think institutionally as well. This stuff gets into the institutional structures of educational institutions too. So, administration becomes focused on the kinds of data traces that are available from the learning management system, which might be combined with the student information system in something like Blackboard or Canvas and so on, which makes all sorts of so-called insights available that can be used for decision making. What are the longer-term potentials as maybe some of that decision making becomes more automated? Does that have implications for the ways in which students are categorized, classified, and maybe targeted and acted upon in some way?
Will Brehm
How do you approach some of these questions in your own teaching? You’re about to start a new semester, or maybe you have started a new semester in Scotland. You’re going to have to deal with some of these learning management systems. You’re going to have to deal with generative AI in the university. How do you navigate it as an individual who has such deep knowledge around this whole area, from a pretty critical lens? Like, you know, what’s your approach to addressing this, or at least navigating it day to day?
Ben Williamson
Well, I teach on two programs. One of them is entirely online. So, I am a digital educator. I genuinely believe that for certain sorts of courses -ours is a course for professional educators- online, asynchronous, intermittent courses can be fantastic; they open up great opportunities for people. We have been quite careful in the setup of that program, for one, about the learning management system we use. It’s a kind of open-source alternative, where we don’t engage with all of the data aspects. And I think we’ve designed the program and the individual courses to maximize what’s possible within the constraints of the platforms, whilst also being significantly cognizant of the possible implications of these kinds of platforms. I teach on another course in a new institute at the University of Edinburgh, where we do have a learning management system which could potentially be seen as slightly more problematic. So, I’m interacting with that cautiously, particularly around the functionality which is becoming available in that platform -although I’m not sure it’s switched on in my institution- for automated feedback. So, within this particular platform that we’re using, that is a function which is available and is in use at some institutions -I’m not sure how widely- that would basically allow us as educators, if we really wanted to, to offload responsibilities in terms of giving feedback. I think the really significant issue with something like that is: what happens when there’s an expectation as an educator that you use that kind of automated function? What happens if you’re in an institution where your time is so constrained that you end up having to do it out of necessity, because of teacher shortages or educator shortages, because we’re not recruiting enough people to do the grading and so on? So, while I think we can use these systems in interesting and often quite creative ways, I think there are some features of them which, as educators, we do need to think really hard about -not just now, because maybe out of necessity we need to grade all these assignments, but in terms of the longer-term implications. I really wonder, if I was a student at a university where I was paying fees and I discovered that my assignments weren’t even being graded, or having feedback written, by a human, I’d really wonder what the value was, or whether I, as a student, was just the source of the value.
Will Brehm 19:05
Recently at my university, I saw this same function being shown. I don’t think it was turned on yet, but they were demonstrating what it’s capable of. And yeah, I was surprised, and I was very cautious in thinking about how I might use that or not use that. And I think what you’re saying here is that it’s both an individual issue -every lecturer has to make these sorts of decisions- but there is a larger collective issue of being in a profession, and what we expect from that profession.
Ben Williamson 19:38
Yeah. I think that’s right, and I think we’re probably at a very difficult moment collectively. Even within individual institutions, I don’t think there’s collective consensus about what to do about many of these new AI automated features. I think we’re probably in quite a good position in universities: we’re staffed by people who are experts in tech, and many of our institutions are staffed by people who are experts in the social and political and economic implications of technology as well. And many of our institutions are forming task forces, working groups, and so on to try to resolve some of the significant issues -obviously, most of the concern at the moment is around students turning in AI-written assignments. And I think there is an issue about what kinds of guidelines we can collectively agree at an institutional level, and how do we scale that up so that there is, amongst the professional community of educators in higher education, some kind of agreement about a sensible way forward -not just on students’ use of AI, but, I think, also on staff use of AI. And to what extent do we push back, or should we push back, on some of these new features that are being rolled out into the platforms as they become infrastructural to our practice? We usually can’t teach without the LMS. Oh, right, that LMS now allows us to automate grading. Ooh, there might be some people in senior management in some organizations who will see this as a great time-saving and potentially cost-saving opportunity. I think we need to be super, super aware and cautious, and form into some kind of collectives to push back where necessary.
Will Brehm 21:21
Another area that you’ve highlighted recently, in a similar vein, is how some AI models are being trained on or with publications that academics wrote and that then got published by the big five publishers in the academic space. Talk me through that. What’s happening in that space? How are academic publications training AI models, and what are you worried about there?
Ben Williamson 21:50
Yeah, okay. So, the use of AI in academic publishing is, I think, part of a longer trend, which is that we’ve become completely dependent on commercial publishers. We’ve had a relationship where they are kind of stewards of our work; we depend on them. And we’ve suddenly reached this stage -which has only really become apparent over this summer, though it’s something we might have expected, given the way that generative AI has been built and trained- where the academic publishers are, almost all of them, signing partnership agreements with AI companies, so that academic content -chapters in books, whole books, journal articles- is being made available so that tech companies can train AI models. And at the same time, the tech companies are also working with the academic publishers to build new kinds of AI functionality for academic research purposes. With most of this, it’s pretty unclear exactly what’s happening. And I think that’s one of the things that really concerns me: there’s a lack of transparency. In the same way that there was quite a lot of lack of transparency amongst the big AI companies about exactly what training data they’d used, there’s now a lack of transparency from the publishers about exactly what they are making available to the big AI companies. I saw somebody on social media describe this as academic content being turned into a bunch of assets that are then rented out to Microsoft and the rest. So, I think we’re all enrolled in exactly the same set of issues as the student data problem; it’s similar sets of issues here. I’m an editor of a journal published by Taylor and Francis. We already know that Taylor and Francis has made Routledge books available to Microsoft as part of a partnership. We also know Taylor and Francis has another AI partnership, but they haven’t said who it’s with. So, as editors of a journal, we are at the moment trying to liaise with our publisher to get greater clarity. Our understanding at the moment is that Taylor and Francis journals are not yet included in the training data, but we want clarity about whether that will be the case in the future -it’s a long-term partnership, over three or four years. I feel a slight moral obligation, as an editor of a journal where we publish quite a lot of critical work around AI, to be able to say with confidence to authors whether submitting to our journal means that their paper is going to be ingested into GPT-5 or another model. There is, I think, a deep sense of unease and concern that this is the latest stage in a relationship that’s become more predatory on the part of the commercial publishers -that this feels very exploitative. It feels like the value that is to be made out of academic work is going to be gathered by the large tech companies, with better models, and by the publishers, who have already been saying they’re going to make tens of millions out of these arrangements. There are some whispers from the publishers that some kind of increased royalties will be available to academic authors. I think many academic authors would question what that might look like, given we rarely get any royalties at all. We freely give our labor. We’ve depended on these publishers to act responsibly, whilst fully aware, many of us, that this is an asymmetric relationship.
I think we’re at a particular moment where that kind of thin trust that’s existed is in real danger of breaking. As an editor, I really don’t want to see people starting to boycott either our journal or any others, because our fields depend on these systems that publish our work and so on. You know, our knowledge sharing depends on these systems. But I think, again, as an academic collective, it might be useful to express our sense of unease to these publishers, and certainly at the moment, to really demand clarity about exactly what is and is not in these training data agreements.
Will Brehm 26:16
I mean, I guess to end, I want to reflect on something I heard Neil Selwyn mention last year at a conference. He said something to the effect that generative AI is neither socially sustainable nor environmentally sustainable. And he was really advocating, more or less, a pause: we need to stop going down this road and think through the costs, both socially and environmentally, of what it would mean to use generative AI in our everyday lives, in our teaching, in our classrooms, in our universities. And I’d just be interested to know where you sit within that debate, as an ending to us catching up after four and a half years.
Ben Williamson 27:01
Well, earlier in the year, I co-authored a report about AI in schools with the title “Time for a Pause”. So, I kind of go along with Neil’s argument there: I really think there are reasons to put a bit of a brake on the rapid roll-out of AI across all of our systems, whether in research, whether in teaching, whether for studying and learning, and so on. And I think there are many reasons for that. Again, earlier in the year, I wrote a blog post with 21 arguments against AI in education. I won’t rattle through them all now, but I think the environmental argument is a particularly strong one. It’s become pretty clear that the energy demands of AI are super intensive, and it guzzles an awful lot of water, often in areas that really require it. So, I really like the argument made by Felicitas Macgilchrist that we should be putting the emphasis on rewilding education, rather than rewiring education with AI, as a real recognition that maybe we need to put significant limits on the kinds of technologies we’re using, because we should have some kind of responsibility to the environment. And on the social question, I suppose I’ve always thought of AI as not just some kind of alien technology which is coming into our institutions and doing stuff. And it’s not just a product of technology companies who see some kind of opportunity. The real risk of AI is that it interweaves with a whole bunch of other factors, and that includes political priorities. So, in the UK at the moment -just in the last week, in fact- the Department for Education for England has announced a 4-million-pound project to create a content store to allow AI companies to build better models, and, as part of that, a Catalyst Fund for edtech companies to use those AI models to develop prototypes for feedback and marking to be used in English schools. So, we’re at a stage now where we have technology companies getting into relationships with government departments to address what are seen as policy problems. And a particular policy problem, from the perspective of the Department for Education, is teacher workload. We have a teacher shortage in the UK, as in many other countries; we have a teacher recruitment problem -we can’t get enough teachers in- and teachers are working crazy long hours. So, the idea is that you would have these kinds of language model systems helping in lesson planning, and in marking, and in providing feedback, and so on. So, that’s supposed to be a technological solution to a problem which is actually a really massive political and economic problem: austerity in schools, which we’ve experienced in the UK for many, many years -and it will be the same elsewhere- and it’s just a technical fix on top of that. So, I’ve been thinking of this in terms of a kind of automated austerity schooling: we’re now introducing automation to seemingly address the problems of austerity in schools, but at the same time, it’s just going to reproduce that austerity, because it’s not really addressing any of the root causes of teacher recruitment, teacher workload, and so on. It’s just putting a kind of technological fix on the top, and in doing so, it’s turning a really macro problem into a micro problem that’s then amenable to technology and AI companies to fix, and that’s likely to have long-term implications.
If we don’t really fix some of the real problems with schools, we’re going to end up with layers and layers and layers of more automation and AI, not just in marking and in feedback and lesson planning, but in all sorts of other aspects of schooling. I think it’s massively important that we try and put a bit of a pause on that until we’ve really looked into whether any of this stuff is actually any good. Do we really need it? Is it necessary? I don’t think we know the answers to these really, really basic questions yet.
Will Brehm 30:56
Well, Ben Williamson, thank you so much for joining FreshEd again, and thank you for all your work on looking at education and technology over the last many years, but definitely over the last five years.
Ben Williamson 31:07
Thank you very much for having me on the show, Will.
Want to help translate this show? Please contact info@freshedpodcast.com
Related Guest Publications/Projects
Re-examining AI, automation and datafication in education
Amazon and the new global connective architectures of educational governance
Future visions of ‘learnification’ and ‘datafication’ across humans and digital technologies
The datafication of teaching in higher education
Code acts in education: Automated austerity schooling
Big data in education: The digital future of learning, policy and practice
Mentioned
Amazon Web Services (AWS) for Ed-tech
Making the user friendly – Radhika Gorur and Joyeeta Dey
An Apple for teacher (education)? – Steven Lewis
Techno-feudalism: What killed capitalism – Yanis Varoufakis
The rise of education rentiers: Digital platforms, digital data and rents – Janja Komljenovic
Data as asset, data as rent?: Rentiership practices in Ed-Tech startups – Janja Komljenovic
Publishers strike data deals with AI companies: What it means for academic authors
The SoA responds to Taylor & Francis Group’s sale of data to develop AI
Time for a Pause – Ben Williamson
Rewilding Technology – Felicitas Macgilchrist
Teachers to get more trustworthy AI tech – UK government’s new project
Recommended
Understanding the role of digital technologies in education
How are teachers using AI to do their jobs – Education Week Survey
AI-adaptive learning, subscription strategies, and the quest for global online MBA accessibility
The logic of Ed-tech: Three critical directions
How platformisation affects pedagogical autonomy in primary schools
Governed by Edtech? Valuing pedagogical autonomy in a platform society
AI: An introduction to generative AI and its use in teaching, research, and publishing
The uneven distribution of AI’s environmental impacts
Evaluating the social impact of generative AI systems in systems and society
Humans versus Chatbots: Scaling-up behavioral interventions to reduce teacher shortages
Have any useful resources related to this show? Please send them to info@freshedpodcast.com