Mary Burns
Students in an AI World
Today we conduct a premortem of Generative Artificial Intelligence in Education. My guest is Mary Burns.
Mary Burns is an internationally recognized expert in teacher professional development, online learning, and educational technology. She is the lead author of the new Brookings Institution study entitled A New Direction for Students in an AI World: Prosper, Prepare, Protect.
Will Brehm 1:54
Mary Burns, welcome to FreshEd.
Mary Burns 1:55
Great to be here. Thank you for inviting me.
Will Brehm 1:57
So congratulations on this new report. It’s substantial. There’s a lot in there. And I’m looking forward to diving into AI and education today. I guess to start, I’m really keen to understand how you use AI in your everyday life.
Mary Burns 2:11
You know, that depends on what I’m doing, I guess. So I’ve used AI a lot for my own work in instructional design and course design, video, audio, teaching my university colleagues how to use certain educational AI apps. I use it a lot for translation. I actually speak a few languages, and I use it for help in the languages that I speak. Obviously, for the report, we used it as an object of study, which I’m sure we’ll discuss in this interview. You know, I’m using it for research, for writing, for a lot of condensing information, reorganizing information.
Will Brehm 2:47
In your own experiences, what’s the most annoying thing about AI?
Mary Burns 2:52
I mean, AI is extraordinarily helpful. It’s an efficiency tool. It’s a productivity tool. I think what I find most annoying about it is the intrusion, whether it’s Copilot or Gemini or Adobe, offering essentially to offload my work for me so I don’t have to read something, I don’t have to write something, I don’t have to think for myself. I don’t know if you remember Clippy, Microsoft’s really awful Office assistant from 12 years ago. It reminds me a lot of that. And what annoys me is that the onus is on me to reject this help. And I typically do; I like to write, and I like to write in my own voice. But what concerns me is that writing represents the interface most vulnerable to this AI intrusion, which really heightens the risk of artificial speech displacing genuine speech, automated output supplanting authentic teaching, and impersonal exchanges replacing our personal interactions. And I find this annoying, but I also find it harmful, especially for students.
Will Brehm 4:06
Exactly. And that’s what we’ll get into. I mean, just on that Clippy, you know, analogy, I totally agree. I mean, there’s this like eagerness of AI that is so annoying to me because you have to like stamp down someone else’s or, you know, this thing, this technology’s like total enthusiasm for whatever I’m doing. And I just I hate to sort of stamp down someone else’s eagerness. And I know that’s sort of anthropomorphizing this AI, but that’s sort of that’s what I feel like I’m doing all the time.
Mary Burns 4:32
Yeah. Yeah. It’s like an algorithmic Irish setter. It just wants to please us constantly. And it’s kind of, you know, just gets a little too enthusiastic, way too enthusiastic.
Will Brehm 4:43
OK, so turning to the report, this report sort of positions itself as doing what you call a pre-mortem rather than a post-mortem. Now, you know, AI hasn’t been around that long. I mean, let’s say generative AI and ChatGPT, you know, it seems like it’s a relatively new phenomenon, a couple of years at this stage. Why should we be doing, or why did you decide to do, a pre-mortem of AI?
Mary Burns 5:07
Well, the pre-mortem idea was actually the idea of Rebecca Winthrop, who’s the head of the Center for Universal Education and was very involved in this report and is the second author. And, you know, her work has been around children and their emotional and cognitive well-being. And her big concern was, you know, why should we wait a decade to see what the harms of AI are, like we have with social media? Let’s jump right in. And so she came up with this idea, I think in consultation with folks at Stanford, to do a pre-mortem. So we all know what a post-mortem is. It’s, you know, an ex post facto analysis of why failure occurred after the body is dead, essentially. The pre-mortem comes out of the technology and business worlds. And it’s really a prospective analysis that examines how an innovation or a technology project could fail. So what you essentially do is construct a future scenario in which you say outright, our innovation has failed. And you ask why. And you bring together a diverse group of people from within a field, and they brainstorm ideas, they discuss them at length, they cluster and rank them. And so we did all of those things. It’s a really powerful technique.
And so, you know, I think I mentioned that Rebecca was the impetus for this, but, you know, another impetus is kind of the technology surround or context, if you will. You know, we know that every technology yields both positive and harmful effects. And, you know, an example is cars, automobiles. You know, they offer us extraordinary mobility and independence, but they’ve ravaged the built environment, they’ve damaged air quality. And so given this dual nature of technology and its unforeseen consequences and the challenge of predicting its trajectory, you know, even for tech developers themselves, I think the idea was that we really have to approach these claims around AI very cautiously and proactively and address potential harms, especially as these new tools gain widespread adoption.
And, you know, I should mention one more thing. I mean, I think a big opportunity with this pre-mortem that it presents is that we know that AI, as you said, is relatively new. Generative AI is relatively new, at least in the education space. It’s still evolving. The rates of adoption vary across education systems. They’re not fixed. So we have a window of opportunity to examine its harmful effects and more importantly, to help influence thinking and behaviors and policy around AI in education. And I think this is a big theme in the report.
Will Brehm 7:34
Yeah, exactly. I mean, there’s still time to respond and put some guardrails and limits and minimize these potential harms or harms that are already being identified. I like the idea and I like the approach, particularly because it’s a nice counter or counterweight to the sort of techno-solutionism enthusiasm that you hear from all the tech bros that are going out on the news that, you know, AI is going to save everything and change everything and it’s always going to be positive and better. And it’s just actually nice to read something that’s a little bit more level-headed, that isn’t just sort of starting from, let’s be enthusiastic, kind of like the AI. You’re kind of, you know, you’ve done the opposite where it’s like, let’s actually be a little bit more realistic. And starting from that point of what if it fails, what could we say about it?
And so I guess I want to start, you know, because the central finding that you have, like the conclusion to kind of give it away, is that the harms outweigh the benefits in the report. That’s kind of what comes through pretty clearly. So I guess I want to understand how you weigh that up. Like, maybe we should start with the benefits. Like, how do you see the benefits of AI? And then we can turn to what the harms are and then how you actually do that sort of cost-benefit analysis to come out with it’s more harmful than beneficial at this point.
Mary Burns 8:47
So this might, you know, help explain the last part of your question. This study was essentially qualitative, and it has several levels. We had over 500 primary sources. So we interviewed about 500 different people across 50 countries. That’s our primary data set. It’s coded, we develop themes, et cetera. And then, you know, we pull out the main themes and we triangulate that against the research: what is the research saying? Then we develop more findings and more themes. And then we present it up another level to a Delphi panel. We had 21 AI and education experts who then commented on, ranked, and poked apart some of these major themes. And then we wrote up the findings and presented them to a steering committee. So it had several levels of validation. But, you know, it’s not a quantitative study. And I would say that’s not unusual right now, because gen AI in the education space is about three years old, and the research always lags behind the technology.
So, you know, the big benefits really came out of the 505 interviews. And the big things are, of course, productivity and task replacement. That’s what gen AI does really well and kind of overwhelmingly that. And then we saw things like improvements in reading and writing, especially for second language learners, tutoring. This is where the strongest research actually lies is in tutoring and assessment. And then the risks that we discovered were cognitive decline among students, the undermining of student well-being, threats to trust and social cohesion, increased dependence on AI and risks to students’ online privacy, safety and security. And interestingly, equity emerged as both a benefit and a risk.
And, you know, one thing I just want to mention: typically, when you hear about risks and benefits, you think of camps, a very binary discussion. But that’s not what we found. So, you know, I did the majority of interviews. I didn’t do all of them, but I did the majority of them. And I have to say that the people I interviewed are enthusiastic about AI. They use it. They like it. They find it incredibly helpful, especially teachers. And yet, despite this enthusiasm, their concerns far outweigh the benefits that they identified. So there was no either-or division among individuals. There was no camp of AI supporters versus AI skeptics. Instead, what was really interesting to me is that it was a kind of both-and complexity. The dichotomy really exists within each person rather than between people.
Will Brehm 11:31
Yeah, personally, I feel that’s exactly how I feel. I feel so conflicted on it. And, you know, it’s hard not to use AI. It’s hard not to use AI today. I mean, even if you just go on Google and do a search, it’s sort of there. And it’s sort of entering so many parts of my life. And I have found that it is making some tasks sort of easier. You know, I also see that it has this negative effect on students that, you know, some students can use it as a tool and help them in the learning process. But then others sort of just kind of capitulate all of the learning to it because they’re just sort of trying to get over the assessments. And, you know, like higher education is simply a form of credentialism and the student doesn’t really want to learn. And I don’t think that’s all students. But I think AI sort of has allowed students who feel like that an easy out. And then it sort of takes over.
And then I get conflicted with like universities both sort of saying simultaneously AI is really dangerous and we need to protect our assessment. And then also at the same time, we need to start cutting deals with AI companies because, you know, this is going to revolutionize everything we do. And it’s just like this cognitive dissonance that is really, for me, really hard to sort of hold in my head at the same time.
Mary Burns 12:46
Yeah, you know, there’s a lot of cognitive dissonance around AI. I have it myself. And, you know, I think the important thing to ask is why these risks outweigh the benefits. I think there are a number of interrelated reasons, but two are important. One is, you know, the design of the tools, because most teachers and students that we were talking to are using general purpose AI tools. They’re mainly using free versions of LLM applications like ChatGPT. And these are not designed for education purposes. And then outside of class, you know, we found that a lot of kids are using these companion bots, and they’re also not designed for educational purposes. I mean, we can argue this point, but I’d argue both of them, the LLMs and these companion bots, are designed for attachment.
And the second reason, you know, why the harms outweigh the benefits is really developmental, because kids’ brains are developing. They’re undergoing these crucial processes of neural pruning and strengthening that depend on repeated cognitive effort and struggle. And they lack the metacognitive skills, the critical thinking abilities, the neurobiological maturity of adults. I should mention that we were looking at kids under 18, ideally 13 to 17 year olds, because 13 is the age of internet adulthood, but younger kids are using AI. And so I talked about the benefits: an extraordinarily powerful and productive tool. It amplifies adult expertise, but it can really undermine the cognitive development of kids, because they don’t know enough and they aren’t developed enough, actually, in many cases, to use it.
You know, a big finding in the report is the degree of cognitive offloading, because cognitive development requires effort, requires making mistakes, requires problem solving. And these are processes that AI very easily circumvents. And many students, like you said, are more than willing to bypass them. There’s one quote in the report that I just keep coming back to, and I’ll just quote it: “Used liberally, AI is not a cognitive partner. It is a cognitive surrogate. It does not accelerate children’s cognitive development. It actually diminishes it.”
Will Brehm 15:01
Yeah, I mean, I’m not a cognitive psychologist, but I can really appreciate that finding. What about the sort of sociological side of education? What, you know, students and their social relationships with each other rather than, say, a chatbot and then also the relationships that students have with teachers in particular? Like what was the report finding when it came to some of these like sociological benefits, challenges, harms? Like, how do you understand that side of it?
Mary Burns 15:27
Yeah. So, I mean, you know, a big theme in the report was cognitive offloading, but I think we also found extensive evidence of emotional and social offloading. I’m thinking of two data sources. One is Common Sense Media, a group here in the United States, which did a survey in 2025, I believe, and it found that adolescents from 13 to 17 are using AI extensively. I can’t remember the exact percentages, but they use it basically to script awkward conversations, or to write texts that they used to write themselves, to essentially outsource emotional labor, to offload emotional labor. And so I think a big concern in the report is that young people are using AI to offload this emotional labor, to avoid all these uncomfortable, unpleasant emotional experiences that are actually critical to becoming an adult: you know, just ordinary interactions with people who are different from you, navigating embarrassing social situations, developing emotional intelligence, criticizing and being criticized, making faux pas, saying stupid things, learning from mistakes. So that’s one source.
And then teachers and parents spoke to us a lot about kids’ social offloading, you know, how kids are increasingly interacting with AI companions and LLMs like ChatGPT rather than their classmates, even when they’re in the same shared physical space. So, you know, I had teachers talking about kids who are sitting in a group, but everybody’s interacting individually with large language models. And parents saying, you know, my kid is spending hours in their room talking with an AI companion, with these algorithms.
And so, you know, this obviously raises some questions about the quality of adult supervision, but I think it also shows how seductive these AI companions and LLMs really are for many kids, not every kid. And this offloading has a lot of effects, including on learning, because learning is very socially constructed. If you’re engaging in social interactions with algorithms, essentially, then there is some kind of social substitution going on. And what people were telling us is that as kids are interacting more with the algorithms, they’re interacting less with each other. And there’s research showing this: that university students, for example, are engaging in less collaboration and what researchers call help seeking as a result of AI use.
I mean, and I guess we can talk some more about this, but this is really dangerous because, as you said, these chatbots are incredibly sycophantic. They’re designed to be sycophantic. We anthropomorphize them, and the developers do as well. The chatbots in particular, the emotional chatbots, use long-term memory to simulate intimacy, and they allow you to create attractive avatars for fantasies and indulgence. They’re frictionless, they’re easy, they’re idealized compared to the demands of human relationships. And the research on social media already shows people turning to online companionship because they find it much more attractive than in-person attachments, or because they’re lonely.
Will Brehm 18:37
Exactly. I mean, that idea of like companionship with something online is not new from AI, but it’s sort of accelerated because it’s so now ubiquitous AI. What about the relationship between students and teachers? Like, did you find any evidence that AI is sort of impacting that relationship? Because that seems to be such an important relationship when it comes to, you know, education.
Mary Burns 18:57
Yes, we did. And I have to say, this is the part of the research, the interviews, that surprised me the most. I wasn’t expecting this. So one thing is, you know, over the last several decades, I think in part because of social media, we have seen a decline in trust in expertise, in institutions, in other people. And AI is exacerbating this. It’s creating what I keep calling a web of distrust within many educational environments. So teachers would say openly, like, I don’t trust that the students’ work is genuine. And students say the same thing about teachers and their classmates and parents. You know, teachers tell us that parents seem to regard AI as a better arbiter of their child’s achievement and output and performance than the teachers themselves.
And I think the other thing that social media started, and I think AI will exacerbate, is this dissolution of, you know, common or public knowledge. Steven Pinker just wrote a book about this, you know, that knowledge is becoming less public and more privatized because of the way we’re accessing it within social media. And the thing is that common shared knowledge allows us to coordinate, humans to coordinate. It’s foundational to trust. So people with common knowledge, we can benefit each other with complementary choices that we’d have no confidence in otherwise if we were using private knowledge. And personal relationships are cemented by common knowledge.
So what teachers and kids are telling us is happening is we see this distrust in terms of relationships, because the assumption is that everybody’s using AI. And then, because we’re in our own bubbles and AI is sycophantic, it will generate essentially whatever you want it to generate. We see that knowledge is no longer fixed. And so I think, kind of epistemically, when you start to lose trust in humans and information, it needs to go somewhere. And I think AI is benefiting from this deferred trust, because when humans distrust other humans, because they see them as biased or unreliable or unknowing, then AI becomes a kind of compensatory cognitive mechanism. And my example, again, is certain parents saying, well, ChatGPT gave my kid an A and you only gave my kid a B minus.
So what’s interesting to me is that most people I interviewed said that they don’t trust AI’s output, but, you know, that’s probably the right thing you’re supposed to say to a researcher. Many of them really seemed to accept AI’s outputs without further interrogation. And in my interviews, it was clear that for many students and parents, AI is perceived as more competent and neutral than the teacher. So I just go back to this idea of trust, because I’m concerned about this. You know, I live in a low-trust society right now, where certain groups no longer trust other groups. And I think AI is exacerbating this lack of trust in people and in information.
And in some ways, this should be no surprise because technology companies have traditionally capitalized on trust transfer, you know, this idea that trust is accumulated from previous experiences and then it can be extended to new agents or contexts like social media sites or AI, for example.
Will Brehm 22:10
Yeah, and I guess that’s where when AI and other technologies that sort of, you know, meddle in social trust, when it takes place inside a social institution, inside a public good like education, inside schools, it becomes incredibly problematic. It’s sort of undermining some of the very goals and values that a public education system might have.
And I guess, you know, why is it that public schools or just schools in general, education institutions in general, why is it that they seem to always fall for sort of this technical hype? Like the technology, whatever new tech is out there is going to solve all of our problems. And it seems like AI is just the, you know, most recent iteration of it. I don’t know if it’s sort of qualitatively different because of the speed and the scale that it’s sort of come out. But it just seems like education institutions just sort of keep falling for the same trick, if I can put it that way.
Mary Burns 23:00
I mean, the first thing is, I just think the hype around AI is just a continuation of classic hype cycles around one-to-one, you know, laptop programs, the Internet, social media. So I’ve been in EdTech since 1997. So I’ve seen this before. I mean, we’ve all seen this before, I guess. So I think the big difference between AI and earlier versions of educational technology is, one, that the technology is different. I mean, in the past, technology has been constrained by menus. And right now, this is, you know, activated by natural language. And in many ways, it’s very impressive. It’s helped me. It’s helped all of us, which is why it’s so seductive to use. But this rhetoric is way more amped up. I think the expectations are way higher. And I think a lot of that is that there’s far more investment, you know, trillions of dollars of capitalization, and then there’s a need for a return on investment. And we see this far more with AI than with other technologies.
And I think, you know, another reason is that schools are just under extraordinary pressure to adopt AI, probably because there is so much investment. And that pressure is external; it comes from a whole-of-society approach. You’ve got big tech, government. You know, where I live in the U.S., Trump’s first executive order on AI moved from mitigation of its risks to complete deregulation, and a subsequent executive order says that schools have to use and teach AI. But that pressure is also coming from the business community, which I think has often viewed schools as job training institutions, an extension of the business world, preparation for the business world. And then the pressure is coming internally as well, within the educational world, because parents and education consultants and school leaders, you know, they’re all pushing for using AI.
So I worked in international development for 20-plus years on USAID projects around the globe. And the best way to get a minister of education to show up at your school is to open a computer lab, because technology is the greatest prop. It is new, it is shiny, it promises modernity. And you know, I keep thinking about my own interviews with parents over the years around technology. I’d have kids say to me, like, these smart boards are ridiculous. Get rid of them. And I’d have a group of parents together, and they’d say, no, no, we want a school with smart boards, because that shows, you know, modernity.
Will Brehm 25:17
That idea of, you know, giving computer labs and having the minister come, like, I feel like I was in that very situation, like 15 years ago in Cambodia. And you go in and you see these computers, and I hate to say it, it often had like USAID, you know, the logos all over it. And I’d been in the schools, like, after the minister comes, and these computers just collect dust, they don’t get used because there’s no power, you know, electricity to the school. It became a running joke for all of the people that were like working in Cambodia, you know, these computer labs that were just, like, actually, in the end, they were just taking up space that otherwise could have been a classroom. And it was just being, you know, there was a lock on the door, because we had to keep these things safe. And so no one can steal them. But they just sort of collected dust. Do you think AI is going to be similar? Like, is there going to be this, like, you know, digital dust collecting on AI? Because it’s like the new thing that we’re going to invest in?
Mary Burns 26:09
Yeah, I mean, I have to say, you know, I keep thinking of that H.L. Mencken quote: for every complex problem, there’s a solution that’s clear, simple and wrong. Technology in education has long been seen as the solution to whatever problem plagues the education system. And, you know, the problems that plague the education system are often human and institutional. Going back to my career, I’ve worked with a lot of ministers of education and decision makers, and, for the most part, ministers of education are engineers, they’re economists; they are not educators. In our case, in the United States, our Secretary of Education is a former World Wrestling Federation executive. And, you know, these people believe deeply in technopoly, Neil Postman’s term for technology as the ultimate authority and solution. They see machines as the solutions, and they see humans, and especially teachers, as the problem in their systems.
And so, you know, I just think we humans want activities that happen fast; we want solutions that happen fast. If you’ve got a lot of problems, and you’ve got people heading up the system who have no background in education, who have no children in government schools because they send them all to private schools, who see technology as akin to a silver bullet, who culturally probably have more in common with tech entrepreneurs than they do with teachers, who are much more likely to listen to tech vendors than to teachers, who have confirmation bias, and who just want the path of least resistance, then yeah, I think AI is going to be absolutely no different, because the whole origin story of technology in education has been a continuous replay of sins of omission: the failure to plan, the failure to create and enact policies and frameworks to guide the integration, the failure to prepare and support teachers, the failure to put supports in place, like a door that locks, plugs, electricity, internet. And yeah, my personal view, you know, this isn’t the Brookings view, but my view is that AI will do some good; there’ll be benefits. I just don’t see it right now living up to the hype, unless lots and lots of things change.
Will Brehm 28:22
So by way of conclusion, I want to just bring up these three pillars that the report puts forward as kind of, you know, overarching ideas for recommendations and, you know, there’s all sorts of recommendations that we probably won’t be able to get to today, but I’d love to just end by talking about these three pillars. These are prosper, prepare, and protect. How do you see these ideas helping us think about the future of AI and education?
Mary Burns 28:46
So, you know, these pillars really came out of the research, from the interviews. People spoke very hopefully of AI helping students thrive, but also about the need to prepare students and teachers to use it ethically and responsibly, and the need to protect students from its harms. I’d say these pillars really encapsulate responsible and productive AI use. And so I think prosper is really focusing on how students can use AI, leverage it to enhance their learning, their creativity, their academic growth, and, you know, essentially opportunities for flourishing. Prepare really addresses the competencies, the literacies, the critical capacities that students need to develop to navigate an AI-saturated world effectively and ethically. And protect really encompasses the safeguards, the boundaries, the awareness necessary to mitigate the risks, ranging from, you know, violations of data privacy to over-reliance, social de-skilling, cognitive de-skilling, and misinformation.
So I mean, all of these together, and I think this is the most important part of the report, these pillars just acknowledge that responsible AI integration requires simultaneously maximizing benefits, building capacity and minimizing harm and a really balanced approach that moves beyond the techno-optimism I think we’ve been talking about and toward much more, I guess you used this term earlier, techno-realism.
Will Brehm 30:17
Well, Mary Burns, thank you so much for joining FreshEd. Really a pleasure to talk today and congratulations on this new report.
Mary Burns 30:23
Thank you, Will. I’ve really enjoyed this.
Want to help translate this show? Please contact info@freshedpodcast.com
Related Guest Publications/Projects
A New Direction for Students in an AI World: Prosper, Prepare, Protect
AI’s Future for Students Is in Our Hands
What the Research Shows About Generative AI in Tutoring
The World Needs a ‘Premortem’ on Generative AI and Its Use in Education
Technology in Education: A Tool on Whose Terms?
Barriers and Supports for Technology Integration: Views from Teachers
What We Know About Educational Technology Effectiveness in Schools
Distance Education for Teacher Training: Modes, Models and Methods
Recommended
Critical Perspectives on AI in Education
Pausing AI Developments Isn’t Enough. We Need to Shut It All Down
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
The Alignment Problem: Machine Learning and Human Values
Technology and Cognition
The Shallows: What the Internet Is Doing to Our Brains
The Distracted Mind: Ancient Brains in a High-Tech World
The Extended Mind: The Power of Thinking Outside the Brain
Critical Technology Studies
Technopoly: The Surrender of Culture to Technology
Understanding Media: The Extensions of Man
Educational Technology – Critical Perspectives
Failure to Disrupt: Why Technology Alone Can’t Transform Education
Audrey Watters: Hack Education
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
Data Privacy and Student Safety
Privacy and Student Data: Examining the Policies and Practices of K-12 Schools and EdTech Companies (Future of Privacy Forum)
Student Privacy 101 (U.S. Department of Education)
Common Sense Privacy Evaluation Initiative (Common Sense Media)
Children’s Privacy and Technology (Federal Trade Commission resources)
AI Policy and Governance
EU Artificial Intelligence Act
UNESCO Recommendation on the Ethics of Artificial Intelligence
Have any useful resources related to this show? Please send them to info@freshedpodcast.com