
WAPO: AI is an existential threat to colleges. Can they adapt?

Posted by: droidrage
« on: October 26, 2024, 12:18:24 AM »
AI is an existential threat to colleges. Can they adapt?

Schools should worry about threats to education if students use artificial intelligence to cheat.

At dinner recently, I idly asked a junior at a top magnet school how many of his classmates use artificial intelligence to cheat. I was expecting the number to be high, but even so, I was shocked: He guessed it was about 60 percent. A subsequent poll of college professors and high school teachers I conducted on X generated similar results: Most thought at least 40 percent of their students were submitting at least some work from AI chatbots.

Those numbers are necessarily unscientific. We’ve never had an accurate count of the number of cheaters, because if cheating were reliably detectable, students wouldn’t bother. AI cheating is even harder to detect, because each essay is unique, rendering tools such as essay databases useless.

What public surveys we have differ somewhat: A paper from Stanford researchers last year suggested that cheating overall hadn’t increased since the advent of AI. (Though it also suggested there wasn’t that much room for increase: Roughly two-thirds of high school students reported cheating in some way.) However, a report released this year by Wiley concluded that “instructors and students both recognize a surge in cheating over the past year and are apprehensive that this trend could continue.”

Whichever you believe, there is reason for alarm either way, because this technology is in its infancy, and students will get better at it over time. AI cheating will most likely first become a problem at schools like the one my teenage dinner companion attends, filled with high-income students who have the most access to, and familiarity with, the emerging technology. If his estimate is even halfway correct, his classmates might be the bellwether of a coming cheating wave that will challenge all schools, particularly colleges, which might face an existential threat if they can’t adapt.

Does existential sound too grim? Well, okay, I’m not arguing that all colleges and universities will actually cease to exist. But the more rampant the use of AI chatbots becomes, the more it threatens the value of a college diploma as a signal to employers that you are diligent, smart and ready for white-collar employment. The less economic value a diploma provides, the less parents and taxpayers will be willing to spend helping students get one.

Many academics will, of course, bristle at the notion that the purpose of college is to provide a job credential. But practically speaking, that’s where the money comes from to pay professors’ salaries. Between 1929 and 2013, educational institutions’ share of gross domestic product quintupled, not because parents and taxpayers wanted students to “learn to think” or become better citizens, but because college graduates earn a hefty wage premium.

AI chatbots threaten that premium in two ways. First, they are radically devaluing many of the skills that colleges teach, such as (the journalist pauses in alarm) the ability to research a topic and turn those facts into competent prose. Schools are starting to talk about how to teach kids new skills that are becoming valuable, such as writing useful prompts for the chatbot, but it’s not clear that they’re best positioned for that task. If you were starting from scratch to make the population AI-literate, you would probably not choose an institution with its roots in the medieval era, nor one staffed by tenure-track professors, who have a median age of 49.

Of course, there’s never been a direct line between classwork and on-the-job skills. (My MBA program at the University of Chicago provided very little training on how to interview subjects or write a lead paragraph for a column.) But a college diploma also signaled that you were a certain kind of person: able to get into a selective college if you attended one, and smart and hardworking enough to complete four years of coursework. That’s where AI cheating becomes a problem. If employers were to suspect your professors were really grading a chatbot, they would discount the signal accordingly, and offer less of a wage premium for having a diploma.

That’s always been a problem to some extent, because cheating happened even in my dinosaur days. But AI is faster to use and harder to detect. In economic terms, the price of cheating has gone down, so we should expect it to increase, particularly as the technology improves, which it will. No one in the “words and ideas” business should comfort themselves that the current generation of AI models is too generic and prone to hallucination to produce truly excellent work; existing models are clunky and stupid compared with what they will become.

One can imagine two types of schools that are hardened against this risk. One leans into AI, requiring students to use it for everything — and consequently expecting them to do more and better work. The appeal of this approach is that students will become skilled at using a technology that’s making inroads into the workplace. The downside is that even if professors are sufficiently AI-savvy to teach this way (and many aren’t), students may miss some of the deeper learning that’s available only through old-school labor.

My first job as a writer was transcribing corporate earnings calls, work now done by a machine. It was tedious and ill-paid, but spending hours diligently listening as analysts sparred with CEOs gave me insights into the world of business that I’d never have gotten by skimming the transcripts, much less reading AI summaries.

So schools that want to impart those old-school skills might have to go in the other direction, as some professors are already doing, moving toward in-class exams written out longhand, oral presentations and graded class participation. But this approach, too, has downsides. Grading this kind of work is much more labor-intensive for professors, who are already stretched thin. Some skills, such as researching and writing a lengthy term paper, can’t be taught this way. And this approach doesn’t provide the new-school facility with AI that students will eventually need when they hit the workplace.

It’s an unattractive dilemma, and I don’t envy academics the choices they face. I say only that their profession, like mine, will have to make them, because AI is coming, like it or not. And it will not help us cheat that reality.