ChatGPT and What Comes Next
Remember how your algebra teacher laughed at the idea you would ever “always have” a calculator?
“I just want to ensure that learning is actually taking place.” That was a line written by a colleague at the online college where we both teach. The online education world is in the middle of a vigorous conversation about ChatGPT and its ilk, and what they mean for academic writing, particularly at the undergraduate level. It can get a little chippy, but this particular conversation became an excellent exploration of the topic in which nobody lost their minds and started denouncing each other. That’s always nice to avoid.
I’ve been doing the “8-week online course” thing for about a year now, and the existence of AI writing bots has created an observable pattern among the instructors. Every term, a new handful of instructors comes on board. They grade the first batch of student writing assignments, and someone is flabbergasted by students’ quite obvious use of AI writing engines. So they take to the internal teacher forums to express their Serious Concerns About Academic Integrity™ and the debate flares anew. It wanes about halfway through a term, only to reignite when the next handful of new instructors wonders aloud why Nobody Is Doing Anything About This™.
The fundamental concern, I admit, is valid. It is reasonable to be troubled by students fobbing an assignment off on AI (or any other crutch-tool), and I agree that students need to not do that. However, I am circling around the notion that broadly denouncing AI misses the forest for the trees. I want to wander around in the premises that seem to underlie professorial handwringing over AI here, partly to have a place to point people so I don’t have to keep repeating myself, and partly as a bit of a rhetorical time capsule. I could be wrong, after all, and it will be interesting to see, however long from now, how I felt as AI was first crashing like a tidal wave over academia. Broad thesis up front: it’s going to be fine.
Now, into the weeds we go.
First, I feel compelled to mention that I have very open-minded, and borderline heretical, views on student plagiarism, and I want to explain that a bit as background on why I feel as I do about AI. In most online college programs (and mine is no exception) students are required to run their papers through a plagiarism-checking engine before turning them in. These engines have been around for a while now, and while none of them are perfect, the technology improves every year. One of the notable improvements is the system-generated report: not only does it flag, word by word, what it has matched in other sources, it can also show me the source itself, especially when that source is another student’s writing. And as students turn things in, the database grows.
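For the curious, here is a toy sketch of the kind of overlap matching such a report is built on. This is only an illustration under my own assumptions (plain word n-gram overlap, invented file names); real commercial checkers are far more sophisticated and draw on enormous source databases. But the basic idea of comparing a submission against a growing pool of prior papers looks roughly like this:

```python
# Toy illustration of plagiarism-style overlap matching (not any vendor's actual method).

def ngrams(text, n=5):
    """Return the set of n-word sequences in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_report(submission, database):
    """Compare a submission against prior papers and report shared passages."""
    sub_grams = ngrams(submission)
    report = []
    for source_name, source_text in database.items():
        shared = sub_grams & ngrams(source_text)
        if shared:
            # Score: fraction of the submission's n-grams also found in this source.
            score = len(shared) / max(len(sub_grams), 1)
            report.append((source_name, score))
    return sorted(report, key=lambda item: item[1], reverse=True)

# The "database grows" as students turn things in (file name is hypothetical).
database = {"student_a_week3.txt": "the fall of rome was caused by many factors including ..."}
print(overlap_report("The fall of Rome was caused by many factors including ...", database))
```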
Given that, it's darkly funny to me that almost every time I get a mostly or wholly plagiarized paper, two things seem constant. One, the source appears to be a paper from someone's friend who took the class last term and said "oh, here, use my paper to make it easier". Two, it's almost always a terrible paper to begin with.
It's foolish to use somebody else's paper. It's just stupid to not even read it and make sure it's good before you use it. When I grade such papers, that is where I focus my comments: if you're going to use somebody else's work, you need to make sure it's good work first. Why would you steal something bad and pretend it was your own? In the rare instance where a student plagiarizes a good paper, I'm generally inclined to compliment them on the choice and then coach them to actually use their own words next time, since they seem to understand the material well enough to have selected a quality paper to copy.
It is in this coaching response that I can find myself labeled an academic heretic. Plagiarism is the cardinal sin of the education world, and there are plenty of zealous inquisitors in the professoriate all too ready to “report violations of academic integrity,” condemning students to the auto-da-fé of academic discipline. Far too many of them rush to sentence a student to academic death when they could stop and do some coaching instead.
As an aside, I also find it darkly funny that many of those would-be inquisitors base their plagiarism accusations on a report generated by a computer program. Sit with that for a second.
I teach history, which I admit may make it easier for me to coach students about stealing work instead of condemning them for it. History as an academic discipline is all about research and source selection. If you make a claim about something, you need to source that claim. If you're stealing something that makes wild claims without support, you need to not steal that thing. Evaluating sources has application far beyond the library, and that takes us to larger lessons that reach beyond any single assignment.
Ten Pounds of Definition in a Five Pound Word
To move away from clear cases of plagiarism, it is important at this stage to directly address one of the baseline questions about AI use: is it plagiarism? There is no consensus opinion. On one hand, feeding a question to an AI and turning in the answer is undeniably the act of turning in something you did not wholly produce. On the other hand, nobody else produced it, either. AI isn’t a for-hire essay writing service of the old school (where grad students churn out undergrad essays for pizza money). No person actually wrote it. There’s no one to have “stolen” it from. The piece of writing would never have existed if not for the student’s query. The student has used a tool to produce the writing, true. But using a tool is not the stealing of another’s content. For myself, I’m in the “it’s not plagiarism” camp, and I find it dangerous to try to define AI writing as such. Expanding the definition of plagiarism in this way dilutes it, and academia would pursue that expansion to its detriment.
But still, the first line resonates: “I just want to ensure that learning is actually taking place.” That’s not a bad thing to want. But outlawing AI doesn’t do anything to address that desire (if it were even possible to do so, which it is not; more on this in a minute). Let’s consider this thing called “actual learning.”
If the goal of what we're doing as instructors is ensuring that students gain content knowledge (or at least know how to get it via thoughtful source selection), and a student can use an AI tool while understanding the content well enough to ensure the AI is producing a useful piece of writing, then it is reasonable to accept that the student has learned the material. I concede that many, many students using AI do not have a solid grasp of the material. They are doing exactly what the integrity inquisition is concerned about: letting a computer do the thinking. That is reasonable to guard against. Having watched this happen at ground level, though, I've noticed some telltales that instructors can watch for as opportunities for coaching rather than condemnation.
In my courses, many of the writing prompts are multipart questions, require references and citations in the response, and/or (since we serve a global population) ask the student to "compare this to the country where you live." It has become fairly easy to spot AI use in the responses. AI responses never seem to have citations, often answer only one part of the question, and keep things very high-level and vague. Of course, I've had the near-ubiquitous experience of opening a paper that started "As an AI, I have no country to compare anything to, but here are some general ideas". My response comments started with "come on, now".
However, a student who has read the material and understands the concepts, but knows their writing skills are shaky (non-native English, for example), may well use AI to outline a response, and then read and tweak it to both double-check the content and add specifics that can carry citations. I've read those papers, too, where parts are fluent and smooth and others are choppy and a little amateurish. That's a student using the tool to help, not to do all the work. And a student doing that is actually learning. Grading should reflect that.
And I find that it can reflect some of the most common AI-crutch errors, too. If a student just uploads a writing prompt to ChatGPT and then pastes the response into the online classroom and hits submit, I can just about guarantee that the piece will end up losing points for not having any references or citations, and taking a hit on the critical-thinking or comparative sections of the rubric. AI is pretty good at bullet-point lists of facts about a thing, but it's really bad at intelligent analysis, especially when synthesizing outside sources in ways that involve higher-order thinking, like “what steps could Cleopatra have taken to avoid the trap in which she ultimately found herself?” AI papers basically invite poor marks for all of that. I prefer to give those poor marks rather than just hit the plagiarism button that summons the academic-discipline executioners.
However, and here's my heresy rearing its head, I find myself giving such papers good marks for grammar and language conventions, as AI is usually pretty good at that. It's reasonable to me to let the student know "this is decently written, and it is superficially what an academic paper should look and sound like. Now go and do this yourself, using good content (with references and citations) and sound critical thinking."
I concede that a given AI paper may turn out surprisingly good and satisfy a given prompt surprisingly well. But in the aggregate, as assignment after assignment comes in, AI will fall off the mark. That might be because prompts become too complex for AI to fully parse. It may be because the student thinks they're getting away with it and gets lazy about how they use AI (e.g., doesn't paste the whole prompt into the engine). Either way, it's going to give itself away, and my gradebook reflects that.
ChatGeniePT
Another line from the conversation: "I realize that it doesn't matter if I am onboard or not, though, because the rest of the world seems to be."
When I read that, my GenX nerd self went immediately to one of the great thinkers loved by GenX nerds: Douglas Adams.
I’ve come up with a set of rules that describe our reactions to technologies:
1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
3. Anything invented after you’re thirty-five is against the natural order of things.
–Douglas Adams, The Salmon of Doubt
I'd like to suggest that it's not so much that the rest of the world is onboard the AI train as it is that a growing section of "the world" accepts that the train is moving and there’s nothing anybody can do to stop it. It can't be made to go away. The genie is out of the bottle. It doesn’t matter if you object to it. You must adapt to it regardless. What academia must grapple with now is how to encourage AI use in a way that makes it productive.
Some students (in unknown numbers that are the subject of ongoing research in psychology and education departments the world over) benefit from seeing things done in front of them. You can learn from others' activity, good or bad. Education via modeling has been a thing for a long time, even if it's a method broadly pooh-poohed these days. It follows that some students could gain longer-term benefits from seeing an AI write a sample paper. How many, and what benefits? I can't answer that. No one can yet. But we'll certainly see as time goes by.
“But academic integrity!” shout the inquisitors. “The students didn’t write these! It’s shredding institutional reputations!”
Even if we stipulate that is true … so what?
The "horrifying end" of academic integrity is often a theme of opposition to new academic tools (of which AI is the latest and splashiest). But let’s say that happens. Then what? There’s always a then-what. That’s the nature of human history.
One of the great fallacies of casual consideration of the future is that nothing adapts to change. The opposite is true, but saying so undermines all the sexy doomsaying that gives everybody such a rush. Nobody wants to talk about adaptation when they could instead talk about the apocalypse; it’s why people still watch zombie movies.
Every civilizational effect is also the cause of a downstream effect, and has been in a great cascade since the moment some forgotten Mesopotamian first poked a stick in the ground and put a seed in the hole. And even before that. Every move in history is followed by another move.
So what's theoretically downstream of the eroding of institutional reputation? What happens in markets, business, and the culture more broadly if "college degrees aren't worth the paper they're printed on" anymore because "everybody's just using AI and nobody actually knows anything"? What will academia do when AI "destroys" the institution's ability to gauge student writing?
Consider yourself a business owner who needs qualified employees and can't find any coming out of the schools that are supposed to supply them. What do you do? You train your own. You create your own program (and/or hire a dedicated consultant) that imparts content knowledge and competent application. That way you know your employees can do the job. You know it because they learned it from you directly, and you can directly gauge their ability to do the job. You'll know who can do the job and who can't, and you'll dump the people who can't. There's no cheating when you have to actually produce a product for the market. You either can or you can't, and if you can't, you'll lose your job and need to find something else to do.
College degrees only really matter today because the business world has treated them as outsourced training for functioning in that world. If that outsourcing can't be trusted anymore, people will take it back in-house. The guild system will return (not that it ever really left), with masters training apprentices who can then cite the pedigree of their training as part of their own practice.
Of course, this macro-level movement away from trusting college degrees will prompt a response from academia. To salvage their reputations as judges of competence, schools could well return to testing centers: personally proctored exams and handwritten essays executed in rooms where no computers are allowed (or at least there's no Wi-Fi) and an actual person is watching to prevent shenanigans. I remember Scantrons and Blue Books. Do you? Schools struggling with this will in turn drive the creation of a new business opportunity: freelance test proctoring. Little offices that contract with online schools to do their testing in person (or in some kind of hybrid arrangement) will crop up all over.
And on and on it will go. AI will change how education is delivered and how competence is gauged. It will not, however, destroy either the delivery of education or the gauging of competence. How that will work and what it will look like remain open questions. But take heart. We’re going to be okay.