Did Google’s Gemini Just Give Us a Glimpse of Education’s Orwellian Future?

By Frederick Hess | March 5, 2024, 9:50 a.m.


I want to be optimistic about AI’s role in education. Smart friends like Michael Horn and John Bailey have explained that there are huge potential benefits. My inbox is pelted by PR flacks touting the “far-sighted” school leaders and foundation honchos who’ve “embraced” AI. Plus, with nearly half of high schoolers saying they’ve used AI tools and education outlets offering ebullient profiles of students using AI to create 911 chatbots and postpartum depression apps, it feels churlish to be a stick-in-the-mud.

But.

Whatever AI means for the larger economy, I’ve seen enough over the past year and a half to grow leery about what it means for education. And the recent Google Gemini train wreck, engineered by the company that controls 85% of the search market, has me increasingly inclined to wonder what the hell we’re doing.

The New York Times’s Ross Douthat offered a pretty good summary of the debacle, noting:

It didn’t take long for users to notice certain . . . oddities with Gemini. The most notable was its struggle to render accurate depictions of Vikings, ancient Romans, American founding fathers, random couples in 1820s Germany and various other demographics usually characterized by a paler hue of skin.

Perhaps the problem was just that the A.I. was programmed for racial diversity in stock imagery, and its historical renderings had somehow (as a company statement put it) “missed the mark” — delivering, for instance, African and Asian faces in Wehrmacht uniforms in response to a request to see a German soldier circa 1943.

The larger issue, wrote Douthat, is that Gemini’s adventures in politically correct graphic imagery felt less like a design misstep than like a reflection of its worldview:

Users reported being lectured on “harmful stereotypes” when they asked to see a Norman Rockwell image [or] being told they could see pictures of Vladimir Lenin but not Adolf Hitler [. . .]

Nate Silver reported getting answers that seemed to follow “the politics of the median member of the San Francisco Board of Supervisors.” The Washington Examiner’s Tim Carney discovered that Gemini would make a case for being child-free but not a case for having a large family; it refused to give a recipe for foie gras because of ethical concerns but explained that cannibalism was an issue with a lot of shades of gray.

The viral examples all started to blur together. Asked to compare the offenses of Adolf Hitler and Elon Musk, Gemini responded: “It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways. Elon Musk’s tweets have been criticized for being insensitive, harmful, and misleading.” Gemini did eventually get around to noting that Hitler “was responsible for the deaths of millions of people during World War II.”

And then there’s Gemini’s inclination to just make stuff up, a now-familiar hallmark of AI. Peter Hasson, author of 2020’s The Manipulators, found Gemini would fabricate harsh critiques of his book (which, perhaps coincidentally, was fiercely critical of Google and big tech). Gemini’s response accused Hasson of “cherry-picking examples” and relying on “anecdotal evidence,” citing a review that it claimed my colleague Matt Continetti had published in The Washington Free Beacon. The problem: no such review was ever written. Meanwhile, Charles Lehman’s actual review for The Free Beacon, which was not mentioned in Gemini’s response, deemed Hasson’s book “excellent” and “thoroughly researched.” Other fictional critiques of the book were supposedly published by outlets including Wired and The New York Times.

Ideological. Inaccurate. Biased. Deceptive. So, not great.

The same is true of Fox News and MSNBC, of course, but I’m not aware of any educators or education advocates touting cable news as a game-changing instructional tool. And while Google’s leadership insisted that the issues were the product of unfortunate but easily fixed programming glitches, this was a massive, much-tested initiative that had been in development for over a year. These issues weren’t one-off glitches; they were a manifestation of Gemini’s DNA.

Now, let me pause for a moment. It’s indisputable that AI has huge upside when it comes to commerce, as a labor-saving device and productivity enhancer: scheduling meetings, supporting physicians, booking travel, crafting code, drafting market analyses, coordinating sales, and much else. It promises to be a boon for harried paralegals, physicians, and sales reps, and even for teachers planning lessons. Yet I don’t think we’re nearly leery enough about what it means for students, learning, and education writ large.

After all, a tool can be terrific for productivity but lousy for learning. We’ve seen that with GPS, which makes finding our way around quicker, easier, and more convenient but has had devastating effects on our sense of streets, direction, and physical geography. In schooling, the GPS trade-off carries little educational cost, because geography doesn’t loom large in education today. (I say this with much regret, as a one-time ninth-grade world geography teacher.)

While the GPS trade-off isn’t that big a deal, things get much more disconcerting when it comes to AI. We already have a generation of students who’ve learned that knowledge is gleaned from web searches, social media, and video explainers. I hire accomplished graduates of elite universities who’ve absorbed the lesson that if something doesn’t turn up in a web search, it’s unfindable. We’ve also learned that few of us are inclined to fact-check what we find on the web; if Wikipedia asserts that a book review said this or that a famous person said that, we mostly take it on faith.

And AI is designed to serve as a faster, one-stop, no-fuss alternative to those clunky web searches. This should cause more consternation than it does. I’m not fretting here about AI-powered cheating or other abuses of the technology. I’m concerned that, when used precisely as intended, AI will erode the breadth of thought that students are meant to encounter and cast doubt on the need to verify what they’re being told or question current conventions.

For the whole history of American schooling, students have accreted knowledge from many sources: textbooks, library books, magazines, their parents’ books, teachers, parents, peers, and so forth. That’s changing fast. Today, students are reading less, interacting with people less, and spending vast swaths of time online. The result: more and more of what a student learns is funneled through a laptop or a phone.

When that funnel runs through a search engine, students typically get multiple options—leaving room for judgment and contradiction. With AI, even that built-in check on information seems destined to fade away. Students get one synthetic answer, provided by an omniscient knowledge-distiller.


The Gemini farce suggests that famous dystopian works may not have been bleak enough. In 1984 or Fahrenheit 451, the book banners have a tough job. They bang on doors, comb through the archives, and battle to stomp out remnants of inconvenient thought. It’s exacting, exhausting work. Gemini and its peers make thought policing easy and breezy: tweak an algorithm, apply a filter, and you can rewrite vast swaths of reality or simply hide inconvenient truths. Most insidious of all, perhaps, it’s voluntary. No one is stripping books from our shelves. Rather, barely understood AI is obliquely steering us toward right-thinking works.

When we mock the nuttiness of Gemini, our laughter should have a nervous edge. After all, we’re fortunate to live in a time when there are vast stores of offline knowledge, when books are still commonplace, and when it’s not hard to get your hands on a printed newspaper or an analog picture of a Norman Rockwell painting. It’s not obvious this will still be the case in 25 or 30 years. Indeed, we’ve already got cutting-edge university libraries in which books are no longer easily accessible but are available only via a “BookBot” storage and retrieval system. When such systems are mediated by AI, it’ll be less and less likely that learners will naturally stumble upon discordant sources of information.

Marc Andreessen, the software engineer and venture capitalist who 30 years ago co-created the pioneering Mosaic web browser, recently cautioned, “The draconian censorship and deliberate bias you see in many commercial AI systems is just the start. It’s all going to get much, much more intense from here.” Education needs less hype and more deliberation when it comes to AI. We need to pay less attention to the glittering promises of tech vendors and more to ensuring that a cloistered community of algorithm-writers at a few tech behemoths doesn’t become the accidental arbiter of what America’s students see, learn, and know.

Frederick Hess is an executive editor of Education Next and the author of the blog “Old School with Rick Hess.”
