by Akim Reinhardt
A little over a year ago I published an essay here at 3QD that implored my fellow educators not to panic amid the dawning of Artificial Intelligence. Since then I’ve had two and a half semesters to consider what it all means. That first semester, many of my students had not even heard of AI. By the very next semester, a shocking number of them were tempted to have it research and write for them.
Many of my earlier observations about how to avoid AI plagiarism still hold: an ounce of prevention is worth a pound of cure; good policies and clear communication from the jump are vital; assignments such as in-class writing and oral exams are foolproof inoculators.
However, other, more abstract questions with profound pedagogical implications are emerging. These can be put under the larger canopy of: What am I teaching them and why?
We Historians specifically, and Liberal Artists more generally, help students develop certain skill sets. We train them in the Humanities and Social Sciences, teaching them to find or develop data and use it effectively through critical and creative thinking. Obviously a political scientist and a continental philosopher go about this differently. However, the Venn diagram of their techniques and goals probably overlaps a fair bit more than a layperson might realize. For starters, we all have the same broad subject matter. Everyone in the Liberal Arts, from art historians and literature profs to psychologists and economists, studies some aspect of the human condition. And while we each have our own angles of observation and methodologies, there are also substantial similarities among them. We all find or generate data (even if the forms of data are different), analyze them, draw conclusions, and present our findings. And those presentations of findings, even when centered around quantitative data, include a narrative.
In other words, words.
All of us, in one way or another and to varying degrees, teach students to write. And that’s what Large Language Model Artificial Intelligence has me thinking about.
If last year I was concerned about stopping plagiarism, today I’m contemplating what “cheating” even means and how its definitions are likely to change in the next few years. And that in turn has led me to reconsider the common calculator.
Today, calculators are so readily accessible that you hardly think about where to find one should you need it. Your phone has one. Even my phone, which flips open and shut and has mechanical buttons, boasts a calculator. All your other computer devices have at least one as well, which is a nice reminder that "computer" literally means "computational device," i.e., "calculator." And of course there are thousands of different calculators available online.
But if you’re my age or older, you remember when calculators were dedicated pieces of technology. Large by today’s standards, they had physical buttons, ran on batteries, and appeared alongside another innovation, the digital clock, both of which used new technology to produce the digital display of numbers. Two hands circling a round clock face (and the briefly popular flip clock) were now repackaged as luminous, linear-font numbers.
At first, many math teachers and testers looked askance at calculators. They felt that the new devices, while useful for professionals, were an impediment to arithmetical education. Students, they insisted, needed to show their work. Do the computations by hand, with a pencil and paper, to prove that you know how to do it. By their standards, using a calculator was cheating.
But of course that all changed as calculators became cheap and readily available. When I took the SAT in 1984, calculators were forbidden. Here in the 21st century, they are as necessary as the no. 2 pencil a student uses to fill in the bubble sheets.
Oh, wait, pencils and bubble sheets for the SAT have gone the way of horses and buggies. But you not only get the point; the disappearing pencil actually reinforces it. Technologies change. That in turn leads to changes in how, and even what, we learn. And as a college instructor, it is up to me to figure out how to change along with it.
K-12 teachers are typically circumscribed by the fencing of state curricula. We professors are pedagogical wild horses, freely roaming the plains of knowledge. Not entirely, of course, but to a much larger degree than K-12 teachers, we can teach what and how we like. And with great freedom comes great responsibility.
How and what to teach in this new, unfolding era of Artificial Intelligence?
I am currently teaching, for the first time since the advent of free, open AI, a historical methods course. That means I am teaching, for the first time since AI's arrival, the kind of course that cannot effectively employ the types of assignments that preclude cheating with AI. Students go home, conduct research, and write a paper. That's the whole point, of course. And so they can easily use AI to cheat.
I’ve tried to get out in front of it. For example, tonight (literally, tonight Monday, March 25th) in class we’re having AI write a paper on their topic. They’ll take it home and critique it. It gets them thinking about AI as a tool, and also lets them know I’m on the lookout for AI papers.
But unlike last year, cheating is no longer my main concern. Rather, it’s how this new technology will change my, and every professor’s, teaching in the years to come. It will alter what and how we will all teach.
If one thing seems clear to me, it is that telling students not to use AI for their papers will soon be looked back upon the way we now regard math teachers in the late 20th century telling their students not to use calculators. Fair enough. But dare I say it: Artificial Intelligence is a bit more impressive a tool than a calculator, even if they both run on the same basic digital technology. And so it has much greater implications for permanently changing pedagogies.
To be honest, a small part of me wishes I were ten years older (or that this technology were still ten years away), so I could retire without having to deal with it. But it’s just a small part. Mostly, this is exciting and I think I am up for the challenge.
I believe the core of the challenge is this: To what degree should we bother teaching students how to do things that technologies can easily and capably do for them? Ultimately, we will not be able to deny them their calculators. Nor should we. That would only hold them back, putting them at a disadvantage. During the early 20th century, when the United States was industrializing at a rapid pace, federal boarding schools were still teaching Native American boys to be blacksmiths. Let's not.
At the same time, however, I believe we should not abandon teaching students how to do what the technology does, at least on a basic level. For example, a student should graduate elementary school knowing how to do addition, subtraction, multiplication, and division. Even if as adults they will mostly use calculators to figure out restaurant tips and do their taxes, learning basic arithmetic for themselves is still important. Without it, people are in danger of being innumerate. And that can lead to disastrous consequences in a modern world of credit cards, student loans, payday loans, and other forms of easy (and often predatory) credit that can ensnare people in a lifetime of debt.
Similarly, even if people in the near future will have AI do most of their writing for them, they still need to understand the basics of grammar, syntax, and sentence and paragraph construction. Just because we have spellcheck doesn’t mean you shouldn’t learn how to spell. Basic literacy, just like basic numeracy, will remain vital to succeeding in tomorrow’s society just like today’s. So while we may be very near the point when it will be acceptable and even standard behavior for students to use AI to write their drafts, they will still need to edit those drafts appropriately.
I’m not yet sure when that jump will come, when it will be standard operating procedure for college students to have AI do the bulk of their writing. It might still be a decade away, after my retirement. Though I’ve no doubt that some professors are already doing it. Personally, I’m not quite there yet, but I suspect I will move in that pedagogical direction sooner rather than later. Because I’m increasingly worried that I might no longer be teaching students the skills they need.
LLM AI already writes better than probably 90% of all humans. It certainly writes better than 90% of my students, which makes it easy to sniff out. It probably also researches much better than most people. And this is going to have profound consequences for the world of white collar labor that colleges train students for.
Last year I wrote about how to prevent cheating in the era of AI. Right now I’m musing about how to incorporate AI into teaching. Next year, I might be writing about looming 25% unemployment as AI makes many white collar jobs redundant. As with the internet thirty years ago, Fordism and Taylorism about 90 years before that, and the rise of steam engines about a century before that, artificial intelligence is going to remake the economy as we know it. Unlike those earlier technologies and systems, it cuts to the very heart of what I do and what I train college students to do. Where do I and they go from here? We’re groping in the haze.
For if studying the past has taught me one thing, it’s that no one knows the future.
Akim Reinhardt’s website is ThePublicProfessor.com