Whether you think AI writing is the wave of the future or whether you think/hope it will get regulated into obscurity (God, I’ve seen what you’ve done for others…), it probably makes sense to figure out what to do with it right now.  You’ve got students using it,  you’ve got edutech grifters pitching expensive nonsense, and you’ve got administrators itching to Do Something (and probably buying the expensive nonsense)–what about you?

That question–what should you do–isn’t just about reacting after you discover that students are using AI when you don’t want them to. Instead, I think it makes sense to think about it as a matter of existential security: In the long term, for the health of your industry or your field or your university or humanity or whatever, how do you want to influence the way that students use AI programs? You probably don’t have that much sway, sure. But let’s try to nudge the wheel a bit.

Instead of just “embracing AI” in your class, though, I think it makes sense to be really clear about what you want your students to get out of it and why. To that end, below you’ll find suggestions for integrating AI into your courses and assignments, organized according to the general presuppositions that I hold about how it should be done. Each of the first three pairs an educational task (developing writing mechanics; developing argumentation and ideas, i.e. content; engaging ethically with outside sources) with my reasoning, and the fourth is a set of suggestions for how you can deal with AI writing that you did not ask for.

You might disagree with these; that’s fair. Some of them align with the Office of Educational Technology’s general suggestions, but I’m probably a little less generous than they are. You can also find my annotations on AI-generated writing here.



On mechanics: If we’re going to use AI to teach, then the point should be to make students better “traditional” writers and not just good stewards of AI writing programs.

A lot has been made of the mechanical prowess of bots like ChatGPT. In fact, it seems to be the one thing that they are universally not terrible at:

"The first indicator that I was dealing with AI was that, despite the syntactic coherence of the essay, it made no sense."
Darren Hick (Furman University), as quoted in “ChatGPT Is So Bad at Essays That Professors Can Spot It Instantly” (Laurie Clarke, VICE, 3/2/2023)

And it kind of makes sense that machine-learning algorithms trained on vast amounts of text would be able to model coherent writing. However, as Professor Hick points out in the article above, the ability to “write well” doesn’t always correlate with actually writing anything useful or interesting or even cogent. So, let’s set aside the actual substance of what students should be trying to say in their writing and instead use AI to play with mechanics.

Paraphrasing exercises: From a pedagogical perspective, the best thing about ChatGPT is that it is non-deterministic (i.e. the same prompt can result in many different responses, in both form and content). For all the ways that students struggle with paraphrasing, one common challenge that I hear has to do with figuring out different ways to say the same thing: “I can’t think of another way to write what this author already wrote.” Sometimes this has to do with the simplicity or complexity of the thing that they are summarizing, and sometimes it has to do with the length that they are shooting for (“I don’t know how to say all of this in just one sentence”). Assign students an article and then ask them to write short summaries of varying lengths (one page, one paragraph, one sentence). Then, have students prompt an AI program like ChatGPT to do the same thing. You can have the students write a short analysis of the ways that their paraphrases differ from the robot’s, asking them to compare the different themes/ideas that they and the AI highlighted, the different signaling phrases that were used, and the overall outcomes of both samples (i.e. in what circumstance would one or the other be more useful, what information is emphasized or occluded in each, and why might that matter).
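
If you want to see that non-determinism for yourself, or pre-generate sample paraphrases before class, here’s a minimal sketch using the OpenAI Python client. The model name, prompt wording, and file name are my assumptions, not anything prescribed here:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_text = open("article.txt").read()  # the assigned article (placeholder file)

# One request per target length. A temperature above zero keeps responses
# non-deterministic, so re-running this script yields different paraphrases
# of the same article -- the property the exercise depends on.
for length in ["one page", "one paragraph", "one sentence"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        temperature=1.0,
        messages=[{
            "role": "user",
            "content": f"Summarize the following article in {length}:\n\n{article_text}",
        }],
    )
    print(f"--- {length} ---")
    print(response.choices[0].message.content)
```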


On content: If AI writing becomes widely adopted in industry (i.e. if you think students will be expected to use AI writing in their careers) then just being able to prompt AI chatbots effectively will not be enough.

Look, I think it’s probably clear where I stand on AI writing. But setting aside my being a sourpuss, I also think there are pragmatic considerations for how you should use it in class. If students are actually going to have to use this tech in their work later on, then it isn’t enough just to send them on their way with some best practices. As an analogy: the expectation for early-career scholars/professionals is that they can use Microsoft Word (or Google Docs, or any number of other bespoke programs and packages) proficiently. That’s the ground floor. But just being able to use them–even using them well–probably isn’t enough to get a second interview. It doesn’t seem all that far-fetched to assume it’ll be similar if AI writing becomes (more) widely adopted.

Generative exercises: I am a fervent believer that if a robot can write it, a person can write it better. Some of that relies on the ability or interest of audiences to sniff out unique, human expressions. But some of it also depends on whether writers understand AI-generated text as a model of what could be written instead of a model for how to write. This is especially relevant to AI writing in certain professional/industry settings, I think–advertising copy, social media stuff, public relations, that kind of thing. Pick a genre of writing (or some standard writing task) that is common to your industry. Then, present students with examples of AI writing in that genre on a specific prompt. Have students evaluate the samples: if you already go over the expectations of the genre in your class, great; if not, have students develop a rubric for how they will evaluate the text. If it makes sense, have students assume the perspectives of different audiences as they evaluate (e.g. what would a potential client think of this, or a competitor in the industry, or a boss that you are pitching to?). Finish by having students present their own writing samples in the same genre on a different prompt, and ask that they write a brief reflection explaining why they wrote what they did, using examples/evidence from the evaluation exercise.
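
If you need a handful of AI samples on the same prompt for students to evaluate, you can request several completions at once rather than re-prompting by hand. Another rough sketch, again assuming the OpenAI Python client; the genre and prompt are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical genre/prompt -- swap in whatever task is standard in your field.
prompt = "Write a 50-word social media post announcing a small bakery's fall menu."

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed model
    temperature=1.0,       # higher temperature yields more varied samples
    n=4,                   # four independent completions of the same prompt
    messages=[{"role": "user", "content": prompt}],
)

# Print each sample separately so they can be handed out for evaluation.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Sample {i} ---\n{choice.message.content}\n")
```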


On ethics: It is irresponsible to encourage students to use AI writing tech without considering its ethical implications (esp. regarding plagiarism in its responses and the training of machine-learning algorithms).

Right now, when it comes to the way that AI programs engage with sourcing and attribution, there are three major problems:

  1. The data sets that models like ChatGPT are trained on are not public. There is (as of this writing) no way to find out if an individual piece of writing is part of a training data set, and therefore no way to know whether an AI response is pulling from a piece of work that it ought to cite.
  2. Most AI writing programs suck at attributing ideas to existing pieces of written work. More often than not, you will find that AI writing programs “hallucinate” sources–they make them up entirely, attributing ideas/arguments/facts/etc. to fake sources. The attributed facts themselves are often incorrect as well.
  3. Setting aside the problem of whether (given the modeling practices described above) AI writing can be considered truly “generative,” there is no standard for how students ought to cite their own use of AI writing bots.

Intuitive exercises: I think some of this is beyond the scope of any individual course, or even university. Broader academic and professional bodies need to establish clear guidelines for how they want to treat AI work, pedagogically. But, in the meantime, you can encourage your students to develop an intuition for engaging with AI-produced text (i.e. a bullshit detector). Assign students a brief response essay that they should complete only using an AI bot like ChatGPT. Have them prompt the AI to support and cite its arguments. It might help to do this a few times–since the programs are non-deterministic, repeated prompting should help students assemble a larger set of references. Have students cross-check the references. If an individual reference exists, ask students to evaluate the way that the AI used it in its response; if the reference does not exist, ask students to try to uncover where the AI might have gotten the claim/idea/fact that it is attributing to the fake source. Here’s an example of me doing something similar.
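
If you or your students want to speed up the existence check, bibliographic databases with open APIs can do a first pass. Here’s a minimal sketch against Crossref’s public REST API; the citation string is a made-up example, and keep in mind that a close match only shows a source exists, not that the AI represented it honestly:

```python
import requests

def crossref_lookup(citation: str):
    """Search Crossref for a citation string; return the best match or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

# Hypothetical reference copied out of an AI response
citation = "Smith, J. (2019). Rhetoric and the Machine. Journal of Writing Studies."
match = crossref_lookup(citation)
if match:
    print("Closest real record:", match.get("title"), match.get("DOI"))
else:
    print("No plausible match -- possibly a hallucinated source.")
```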



On evaluation and grading: Focusing on “detecting” AI writing as a matter of evaluation is probably a waste of time. If you are concerned about students cheating with AI programs, then it will be more fruitful to design writing assignments that are hard to complete using AI.

A lot of energy (and presumably money) has been spent to help people determine whether or not something was written by AI.  It’s been kind of a one-sided battle:

"OpenAI has pulled its AI Classifier plagiarism detection tool due to low accuracy in determining human vs AI created content."
“OpenAI Just Quietly Shuttered Its ChatGPT Plagiarism Tool” (James Laird, tech.co, 7/25/2023)

It probably makes sense to try to limit the ways that students can use AI on assignments where you don’t want them to use it. And though I don’t buy that AI tech will inevitably improve in all the ways that its proponents say it will, I do think that it will probably improve faster than our mechanical detection capabilities will (at least without some kind of regulatory metadata solution). So why chase it?

Instead, I think that the best way to make sure your students are only using AI when you want them to is to be explicit about it–clarify how/when/why you want them to write traditionally or using AI, and clarify the stakes for doing so. This is basic honor-code stuff, but I think it does make a difference if you can get buy-in from students on the idea that they need to avoid certain technologies at certain times in order to actually benefit from a class.

But that buy-in won’t always happen, and for lots of good reasons. Rather than focusing on detection or punishment, then, consider how you can make it more cumbersome to use AI to complete the work that you want your students to complete:

  • Break assignments up into discrete steps where student work is interrupted by feedback from you or a peer.
  • Require reflections or evaluative writing where students have to explain why they have written something the way that they have.
  • Clearly introduce places where you want students to use AI tools (as in the examples, above) and make sure to differentiate the goals of those exercises from a “normal” mode of working.
  • Ask for varying modes of presentation, or have your students utilize different, less standardized genres of writing.
  • Require students to engage with primary source materials that programs like ChatGPT probably don’t have access to. Or make some up.
  • Have students annotate their writing with comments about argumentative structure (i.e. have them note where they are making claims of fact versus claims of value, describing evidence versus explaining evidence, etc.).