Guest post by Nathaniel Rivers, English, Saint Louis University
My syllabus statement on generative AI (as opposed to AI writ large) remains a work in process as it responds to an emerging phenomenon. Although "phenomenon" is the wrong word: it treats generative AI as some kind of natural or even inevitable thing—as if it's something that's arrived out of the blue like a storm. It is instead a project: it's designed, packaged, and promoted by discernible agents with identifiable goals.
What I want to do here is annotate, as it were, my statement—to lay out the sources I am drawing on and the thinking in which it is grounded.
One headline reads "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic." Another reads "ChatGPT consumes 17,000 times more energy than an average US household." Hannah Story Brown argues, "AI is rapidly cementing itself as a 'disaster multiplier.' The consequences of its unchecked expansion range from exacerbating water shortages and increasing pollution to emboldening the surveillance state and spreading misinformation" ("Chatbot, Are We Cooked?"). Given the material realities endemic to generative AI, my own response is rather sour.
This opening gambit reflects my growing frustration with the conversation around generative AI in Higher Ed. We quickly jump over or, indeed, bracket off these material concerns. For me (and as I address below), the environmental, economic, and political problems with AI far outweigh those related to plagiarism or academic integrity. In other words, our not unreasonable worries about students deploying generative AI to violate academic integrity strike me as something of a red herring—if not misdirection. If we learned that a student had exploited and mistreated their roommate in order to cheat on an exam, I suggest that our first concern should be for the wellbeing of their roommate. What are we not worried about when we worry about plagiarism?
The consequences of the current iteration of AI (for there have been others) are far-reaching yet far beyond the grasp of most of us to shape. That power is in the hands of an unaccountable few. Any statement on generative AI must start with this recognition.
I'd argue that we need to consider this larger, worldly context because we are participating in generative AI regardless of whether or not we personally use it. Generative AI is not simply a tool we (as teachers, researchers, students) can either take up or not; it's an all-encompassing system that implicates us all. (Non-smokers in a smoke-filled bar are, in fact, smoking.) Those who use it necessarily fold us all into it—vacuuming up our words, siphoning off our water, and burning through our kilowatts. When we think and talk about generative AI, we must trace where and to whom this power flows. Even its proponents acknowledge, indeed celebrate, the power of generative AI to radically change the world. Yet no one has asked us (whoever this "us" might be) whether we want it or not. (And I am unconvinced by the rather sophomoric assertion that individuals can simply use it or not.)
I also think it is essential that we attend to the specific capacities and effects of generative AI. It isn't at all like a calculator, for instance, a comparison many proponents have made. No one has yet made a calculator capable of encouraging a teenager to commit suicide. Furthermore, this argument strikes me as entirely too resonant with the spurious logic of an organization like the National Rifle Association: people have always used weapons to harm others, and in that regard an automatic weapon is no different from a knife. To which we might respond, "then why do I need a gun?" Which is to say, many objections to generative AI have to do precisely with its particularity—not because it's some newfangled thing that frightens people, but because of what it, specifically, does.
When we think as a university community about taking up and using generative AI, we must acknowledge its costs and how its adoption reorients us to our work and to one another. And this is a conversation we need to have (or should have had) before we use it. I find shallow the conversation about "how to ethically engage generative AI" insofar as that particular conversation has already asked and answered the question of whether or not to use it at all. The primary concerns often raised about generative AI (environment, intellectual property, etc.) are not concerns that arise in the use of generative AI; they are built into the system's production. There's no way to engage generative AI that retroactively changes how it was built. To use generative AI in the first place is to have already addressed those concerns—and to have found them, it seems, not all that concerning.
In terms of teaching, my response is something like gloom—gloomy for the missed opportunities to learn, to grow, and to become otherwise through writing. To write is to experience joy and pleasure (often born of struggle and frustration). To write is to both discover and invent ourselves. For this reason, the use of generative AI in this class is prohibited.
The second part of the statement shifts to the scene of the class itself. I spent a lot of time settling on the word "gloom" here. I think it works to capture that I am not, say, offended by students using AI or even that it angers me in some way. I understand the incentive structure that leads some students to use it. I understand that students live in a world and attend a university that both treat generative AI as a fait accompli, and that they are terrified of being left behind by an increasingly cruel and cutthroat world. But I also know that using generative AI robs them of experiences and practices that I find valuable. (In this way I am the parent who earnestly claims, "I'm not mad; I'm just disappointed.") I also know that evidence is already suggesting that the use of generative AI degrades certain cognitive capacities. Preliminary results of research conducted at MIT strongly suggest that using generative AI to write subsequently degrades the user's ability to write. It's not simply that students who use generative AI miss opportunities to improve their writing; generative AI actively harms their ability to write in the future.
The act of writing is valuable, which is why this course emphasizes the process of writing rather than only the final product. At this time, however, AI writing is generally undetectable by so-called detection tools. Evidence suggests that even experienced teachers aren’t much better at detecting it and often reward AI writing with higher scores.
For every detection tool that appears to work, there’s another tool that works around it. Pangram is a promising detection tool. It will detect writing that I know to be generated by AI. I take that now-detected writing over to TwainGPT and use the site’s “humanizer” tool. I bring the freshly “humanized” text back to Pangram, which now reads the writing as 100% human generated. And none of this work, mind, has anything to do with the teaching of writing. Detection (or at least actionable detection with respect to academic integrity) strikes me as a fool’s errand. And, furthermore, I didn’t get into teaching to ascertain the provenance of what students submit to me.
But for me all of this is beside the point. College is a non-compulsory educational experience. The carceral work of detection and punishment is uninteresting to me. There is only pain there, and an antagonistic relationship between teachers and students that is hostile to learning.
Were we able to reliably detect the use of generative AI, detection is all we would do. But the goal of teaching is not to be able to prove that a student did the work themselves. Ours is, borrowing from the four Jesuit Universal Apostolic Preferences, to accompany students as they do the work of learning. What makes me gloomy about generative AI is the existential threat it poses to the teacher-student relationship by degrading the trust upon which that relationship rests. To imagine that we can (and should be able to) detect when students use generative AI is to render them as something other than students. I am not interested in an arms race with my students.
There should be joy to find and to have in a writing class. And joy to share with each other. Writing we might want to do ourselves.
Having worked through the material, worldly consequences of generative AI and then laid out its pedagogical implications, I wanted to end on an affirming note: not simply that generative AI is deeply troubling, which it demonstrably is, but that there are better things besides it. That is, by all means resist generative AI because it damages the planet, dehumanizes labor, and concentrates power, but also don't use it because it's far less enjoyable than doing the work yourself, with others.