
WAPO: In the age of AI, colleges need to rethink how students learn

In the age of AI, colleges need to rethink how students learn

Large language models are changing higher education. These authors show that it doesn’t have to be a bad thing.

When John Warner was in third grade, his teacher told the class to “write a list of instructions for making a peanut butter and jelly sandwich.” Once they were done, she handed out everything they might need to make one and instructed the students to follow the directions to the absolute letter. “This is how,” Warner writes, “I found myself knuckles-deep in a jar of Centrella-brand creamy peanut butter.” He had, it turns out, not written anything about using a knife. In “More Than Words: How to Think About Writing in the Age of AI,” Warner uses this anecdote to illustrate that the act of writing is not about the production of words but is, rather, a complicated and deeply human process that involves a relationship between thought, memory, intention and language. ChatGPT and other large language models, or LLMs (sometimes erroneously called AI), can produce words but can’t do anything else.

The peanut butter incident isn’t the only example from elementary school that Warner, a career college writing instructor, uses to talk about writing and technology. Technological change has long shaped the way we produce language, Warner says, but it has never altered the fact that human thought remains at the core of that production. Warner had, by his own admission, terrible handwriting, and it was only his introduction to the typewriter in the summer after fifth grade that allowed him to “capture [his] thoughts at close to the speed with which they occurred.” I can relate. When I was a child, dyslexia left me with trouble forming clear letters by hand, but with the advent of keyboards, I slowly built a new relationship with writing. Decades later, in my late 30s, I embraced speech transcription apps. For me, that was the true technological breakthrough, and in fact I drafted this review by speaking into my phone and letting the program capture it. But the words you’re reading now still convey my thinking. The technology doesn’t break the chain between thought and word. ChatGPT, Warner argues, does something else entirely, and that something isn’t writing.

Warner admits that LLMs can produce something that seems like writing, including fully acceptable college composition essays, and can do so at a remarkable pace. But the mere generation of language, he argues, is not writing. Despite that, he writes: “I see ChatGPT as an ally. If ChatGPT can do something, then that thing probably doesn’t need to be done by a human being. It quite possibly doesn’t need to be done period.”

Warner is looking to articulate what humans do when we write, and why it matters. In his earlier work, he demolished the utility of the five-paragraph essay, whether as a form of writing or as a mere writing tool. If LLMs can churn out pretty good five-paragraph essays, that’s just a sign that such essays aren’t worth writing. And even when the models turn out something passable — at one point Warner has ChatGPT write “a Shakespearean sonnet about the beauty of a Chevrolet Corvette” — the output is merely the product of “putting things together in the imitation of a style,” following rules divorced from actual thought and based on nothing but probability. ChatGPT can describe a child’s bedroom but can’t describe Warner’s childhood bedroom, because that process starts in his mind.

Human writing is “spiky, weird, and messy,” especially when we use writing as a way to figure things out. Classroom writing should reflect this messiness, shifting from outcome — formulaic essays that students can, if they want to cheat, produce without doing any actual work — to process. He prescribes beginning not with rote writing tasks but with building an appreciation for language, and only then asking students to write the kinds of things we really have to write — for example, a recommendation of whether a given person might like a film, a task that shows “no two writers will produce the same piece, even if they are working from similar material.” As a reflective exercise, Warner then asks his students to talk about their process, an assignment immune to being “outsourced to ChatGPT, because [it] has no knowledge of the students’ thoughts.” All of this is intended to teach students to equate writing with thinking, not with putting words on a page.

Warner’s prescription that teachers should focus on the things humans can do, and do those things as much as possible, finds a compelling echo in “Hacking College: Why the Major Doesn’t Matter — and What Really Does,” by Ned Scott Laff and Scott Carlson. The authors, a veteran college adviser and a reporter for the Chronicle of Higher Education, deploy vivid anecdotes of students (mostly from marginalized backgrounds and attending nonelite schools) finding pathways through college into productive and rewarding lives by pursuing goals that emerge from their genuine concerns. In each case, the students find a way to combine those preexisting interests with their coursework — often through a conversation with a counselor like Laff or a peer mentor, but sometimes just through good luck in the aftermath of a crisis — to work on a “wicked problem” in modern society.

Too much college advice, by contrast, pushes the simplistic mantra that “major equals job,” and that’s just not true. One of the students profiled by Laff and Carlson, for example, arrived at college hoping to “spread the word,” but instead of anyone asking her what that meant and getting to know her, she was tracked into marketing. Ultimately, thankfully, she ended up in the humanities, majoring in Spanish and combining that program with classes in social media, public relations (communications) and a little marketing to prepare her for professional life.

I am a college adviser, and my colleagues and I don’t come off that well in the book. Because I’m located in a history department, a discipline I know intimately, I like to think that my conversations with students are carefully structured to help them articulate their interests, see where those interests show up in our curriculum, and move toward a wide variety of professional futures (mostly in jobs that don’t have “history” in the title). But reading this book convinced me that I need to refine my approach to ask more questions and offer fewer answers, at least until the students I work with have told me a lot more about themselves. As the book says, the most important moments in my decades in higher ed have come “when students really open up and talk to people on campus about their curiosities and goals.”

At one point, Laff and Carlson mention that schools are beginning to roll out ChatGPT advisers, which they argue is entirely the wrong approach. A program might be able to tell you whether a class counts toward your requirements (though given the errors I see daily in our automated system, maybe not even that), but it can’t do the human work — the spiky, weird, messy processes, to borrow from Warner — that really matters. And that’s where these two books come together most clearly: There are no automations, top-down systems, or simple answers that will fix what ails higher ed. But the solutions are already here, in the people working hard to teach and to learn.