by Christopher Hall

I did something a little odd this past semester: I had my students write an assignment where I instructed them to use AI as much as they wanted. The assignment was to produce a cover letter and a resume to respond to a job ad I had given them; normally, for any given assignment, I’d give them parameters for how AI was to be used, either not at all or under guidance and supervision. But here, I took the reins off.
The reasoning was pretty simple: those reins, held by a communications professor hoping these Gen Z students might be directed to engage the engines of generative thinking that remain in their minds, wouldn’t be there at all when it came time to apply for their entry-level jobs in a few years’ time. So why not have them produce the very best product possible, which some of them could only do with the help of AI, and have their professor look over it to make sure it wasn’t hallucinated garbage?
Not doing so, making them write what would likely be inferior documents just so I could criticize them, seemed backwards and a little cruel, like dragging a lot of abacuses into a math classroom and making the students do algebra just for the sake of, you know, doing it old school.
So that was my covering rationale; the real rationale is that I’ve been trying to convert all of my assignments over multiple courses to meet the reality of AI, with marginal success, and I ran out of steam here. I had thought that, perhaps, I’d get them to produce something in class, have other students critique it in a kind of workshop atmosphere – I didn’t have a really clear idea of what I wanted to do at the beginning of the term, and predictably, as time ran short, I went for what was essentially the “default” option. And, seriously – writing a draft cover letter on the spot, in class, getting peer feedback, revising, getting more feedback, etc. – was I trying to replicate a process that is simply outdated? As I’m increasingly finding, getting students to stop relying on artificial intelligence means, paradoxically, introducing a level of artificiality, a distance from practicality, into the classroom.
Some of my colleagues are making students handwrite some assignments directly in the classroom; I fully understand the impulse behind this, but I’m nevertheless resistant to it. For one thing, my own handwriting looks like it was produced by a raccoon with a nervous disorder, so I sympathize with any student who may suffer a similarly embarrassing deficiency. But I also wonder how far into the analog past we may be tempted to go. There are times during the many meetings and discussions I’ve been in concerning AI and its impact on college learning that I’ve imagined myself part of a traditionalist enclave in some post-apocalyptic scenario where the technological corruption of The Outside is to be resisted by strict abjuration. I’m not sure I’m clever enough to be able to teach cover letters through Socratic dialogue.
But of course, I’m not asking my students to produce essays or research papers or really anything that depends on the hard and constant engagement of creative and intellectual faculties; my job is primarily to give them the tools they’ll likely need to produce good written communication in a professional environment. And it must be admitted that a lot of that communication is basically going to be routine. If, for a moment, we can drive from our minds the ugly fact that AI is being developed under the auspices of a predatory capitalism that is ensuring its use will be directed towards the enrichment of a tiny few and the continued immiseration of the rest, we’ll realise that this is precisely what AI ought to be good for. The dream of technology is that in automating the routine we free time for more complicated and hopefully fulfilling tasks. If it hasn’t exactly worked out that way – automation in practice just makes more room for extra exploitation – then we can still acknowledge that drudge work isn’t doing anyone any good. Washing dishes did not become a valuable exercise in human capability when dishwashers came around.
The outcome of my assignment was telling. The cover letters and resumes I got were uniformly excellent – my grades for the assignment were absurdly high. They were also uniformly – well, uniform. What I got was roughly a hundred replications of what one could call the perfectly decent and perfectly average cover letter and resume, only with the particular details of each student’s experience swapped in. But there’s nothing wrong with this; these are highly formulaic documents. I had in fact stressed to my students that it was probable that a human would never read their documents; Applicant Tracking Systems would instead masticate what they had written down to the constituent keywords the job was looking for. Idiosyncrasies, or personal touches, might cause a glitch in the system, terminal to their application. So, I had my students machine-produce documents which were to be read by other machines. Despite knowing that this might all be useful to their future employment prospects, my mind keeps reaching back to that moment in Catch-22 where nurses rotate the fluids going into and out of a patient’s body.
Again, I’m not particularly concerned that my students are emerging as somewhat flattened entities in their job application documents. I do very much believe in the core value of having an individual voice – and, indeed, in the general cultivation of robust and complex individuality – but even as I deeply lament the loss of this as a cultural value, I’m not sure that’s entirely within my teaching remit. (Perhaps it should be within every teacher’s remit, but I am trying to be practical here.) I abhor mechanisation, but I must teach utility and consistency, things a machine does very well. Anyone involved in communication, especially in any area where the volume, clarity and replicability of communication are prioritized, must be concerned for their future relevance – and employment. Due to circumstances entirely outside of the influence of AI, colleges in Ontario are not in the healthiest spot at the moment, and so the urgency of the situation is made all the more dramatic. My chief concerns are not whether AI will one day write a novel better than Ulysses or whether Claude (or Claudia) is conscious – both very interesting questions, but not at all germane to my immediate concerns. What I’m wondering, simply put, is whether or not I’m the one holding the abacus.
I did something else this semester which I don’t usually do. We get formal student evaluations, of course, but I wanted something a little more direct. So, I had the students answer a couple of questions in a short assignment. The first question was what they found most useful and least useful to learn in their communication courses. Least useful was grammar – not entirely shocking there. But somewhat to my surprise – and this may have been the product of the fact that this wasn’t an anonymous assignment and they didn’t want to flag their communication professor’s future (or current!) irrelevance – they also thought that learning the rhetorical forms of documents, of the right words being in the right place, was valuable, even though that’s precisely the sort of thing AI does well. They want to know, in short, what the machine is doing, even if they are letting the machine do it. They are not content to allow the components of even mundane communication to become some kind of mysterious ritual only their devices understand.
The other question I asked was how important they thought communication skills would be in their future employment. I wasn’t entirely surprised to see that they said it would be important, but their focus was interesting. They were primarily concerned with interpersonal communication – not with writing a memo, but with the daily negotiation with other people that makes up, of course, the core of many of our working lives. I do fear – and I think my students also fear – that we may all get to the point in communication where the muscle memory to deal with any complex situation might atrophy as we assign AI to complete more and more tasks for us. And I, and they, fear that we have been primed for this all the more by decades immersed in the constant noise of social media, the paresthesia enacted on our nervous systems by incessant short jolts of entertaining irrelevances. I might simply start by saying that, if we ever get to the point (and I know we’re already at that point) where we’re getting Claude to fire our employees or express sympathy for a recent loss, we’ve lost something fundamental, not because Claude is necessarily going to do a bad job of it, but because whatever mechanisms of empathy are the foundational parts of communication now lie trapped within us, unused and desiccating.
