AI Will Not ‘Solve’ Education. Here’s Why
(TNS) — Two things are certain about the emergence of any new technology. One is that it will be accompanied by a host of world-saving missions. The second is that one of those missions will be “solving” education.
Indeed, the more advanced the technology, the more seductive the idea that it can educate us. But history has shown the opposite: the more a technology substitutes for genuine intellectual labor, the less we learn.
AI is a far bigger threat to education than prior technologies. By pushing new frontiers in automation, seeding the notion that we might take humans out of the learning loop entirely, and suggesting that education could be cheap when it should be our most significant investment, AI may not just perpetuate but deepen the challenges already facing our children’s education.
To understand why, consider the technologist manifesto “Why AI Will Save the World,” published last year by venture capitalist Marc Andreessen. In it, Andreessen lays out a utopian vision of human flourishing made possible by artificial intelligence. And in his list of roles that AI could play, the top position goes not to a godlike superintelligence discovering new physics or cures for cancer but to something seemingly much humbler: an AI tutor. He writes:
“It’s a compelling vision. Who can argue against more and better — infinite, omniscient — education? Who would wish for anything but for every child to maximize their potential? Solve education, the problem of problem-solving, and you solve everything else. Just imagine unlocking all the underutilized geniuses of the globe. Dial up the light cone. Let bloom 10 billion Einsteins; and herald them, 10 billion Mozarts.”
But the pursuit of utopian visions tends to produce their opposites, and AI in education is no different. Indeed, as a technology meant to substitute for human intellectual labor and attention, artificial intelligence is anathema to genuine learning. In June, one experiment that deployed a “friendly” chatbot in Los Angeles schools failed. More on that below.
Underlying both the apparent promise of artificial intelligence technology and its inevitable failure are two myths. The first is that you can substitute for the labor of the student. The second is that you can substitute for the labor of a human teacher. Let’s take them one at a time.
AI CAN’T SUBSTITUTE FOR A STUDENT’S OWN WORK
The early AI frenzy in education technology produced a host of products designed to assist the student. Quizlet, for example, a launch partner of OpenAI, announced AI that could help students make their flashcards. Startups like Trellis offered magical textbooks that could tailor any material to any grade level and summarize difficult concepts. Khan Academy’s Khanmigo bot can simulate characters from novels so that students can talk directly to them instead of having to interpret the primary source. And of course, ChatGPT quickly became a favorite tool for students looking to avoid the difficult work of writing.
It’s difficult to argue against the goal of making students’ lives easier. Unfortunately, all the evidence on learning suggests that such supports come with a steep cost: they often hinder learning. Without intellectual challenge, there is no intellectual change. This isn’t just true on the level of learning philosophy. It’s true on the level of neurobiology. As one paper puts it, “Sustained cognitive challenges are required to elicit lasting neural changes that underlie enhancement of a general cognitive function.”
Likewise, the 2014 book Make It Stick, by Peter Brown and colleagues, surveys the science of learning to identify the most effective study methods, including pre-testing, quizzing, and spaced retrieval practice. What do all the recommended methods have in common? They’re hard.
We have a word for when students take shortcuts in their work. It’s cheating.
AI CAN’T SUBSTITUTE FOR A HUMAN TEACHER’S WORK
The second educational myth fueled by AI is that it can solve our teacher shortage. When Andreessen dreams of an AI tutor for every child, he dreams not only of a magical teacher but of a 1:1 student-to-teacher ratio for the globe.
This myth rests on the assumption that there is an effective substitute for individualized human attention. Seen in this light, the vision riding on AI’s potential is not new but the latest in a long line of attempts to “scale” a scarce resource: the attention of an expert educator. That is especially true here in the United States. The history of the American educational system is a history of industrialization, of increasing leverage and lowering the cost per student, from tutoring to small classrooms to large classrooms to online courses. AI tutors are the next step: replacing the human outright.
These attempts to scale education fly in the face of all we know about learning — that learning is social, and that the more human attention, the better the outcomes.
Psychologist Benjamin Bloom, in a famous 1984 paper, posed the “2 sigma problem,” his term for the common-sense but vexing finding that students tutored one-on-one dramatically outperform those taught in conventional classrooms, making individual tutoring the best educational paradigm we know of. But why? While the tutoring setting gets all the attention, even Bloom pointed to the influence of the home environment and the peer group.
He also noted that tutoring improved “attitude” and “academic self-concept” along with student achievement. An educator himself, Bloom recognized the social component of tutoring and learning generally. One doubts he would be enthused about replacing human teachers with machine intelligence, no matter the cost advantages.
So might an AI tutor produce the same positive psychological changes in its students that effective human teachers do? Early evidence is far from promising. As mentioned earlier, the software company AllHere created an AI “educational friend” called Ed, which was promised to be emotionally responsive, to “capture and captive [students’] attention,” and to “be the motivator” for half a million students in Los Angeles public schools. In June, the company announced sweeping furloughs of its staff and a suspension of services.
Against the backdrop of an unassailable mission, millions in funding, and near-complete business collapse, there is essentially no sign that the AI chatbot came close to supporting students’ psychological needs in their schooling. According to the district superintendent, one seventh-grade girl said, “I think Ed likes me.”
Beyond tutoring, it’s not difficult to imagine how a classroom that is warm, trusting, and human would outperform one that is cold, competitive, and mechanical. But there is also no need to. A large and growing body of research on social and emotional learning (“SEL”) demonstrates its efficacy across domains of academic achievement.
A 2023 meta-analysis conducted by the Yale School of Medicine, covering 424 experimental studies, found that students who participated in SEL programs demonstrated improvements across essentially all facets of academic life: higher achievement, greater engagement, improved mental health, and closer relationships with peers and teachers. Perhaps the most promising recent advance in educational programming, in other words, is the exact opposite of having students learn alone with their devices.
Notably, while “techno-optimists” like Marc Andreessen push for technology to meet more and more human needs — including the care of, and cultural transmission to, the next generation — schools, which experienced firsthand how technological intermediation disrupted learning during the pandemic, are doing the opposite.
Just last month, the Los Angeles Unified School District, the second largest in the nation, banned smartphone use in the classroom. New York State is on the verge of doing the same. We might ask whose vision deserves more credence: that of venture capitalist bloggers, or that of the teachers and other faculty who have devoted their lives to education?
Fast Company © 2024 Mansueto Ventures, LLC. Distributed by Tribune Content Agency, LLC.