Concerned About Academic Integrity and AI?
Now that CSU students have a university-provided ChatGPT account, many of us are wondering about academic integrity, especially when it comes to writing assignments. Is the university actually encouraging students to use ChatGPT for their assignments? (Short answer: no.) What's the university's policy when we suspect that students are using AI to write their essays, reading responses, even peer feedback? Are there any technical solutions (like Turnitin) that will work here? And what are we supposed to do, right now in our classes, to ensure that students still learn to write and benefit from the practice of writing?
The answers aren’t easy.
For starters, there are no good technical solutions. Tools like Turnitin or GPTZero can be biased against multilingual writers and have an unreliable track record when it comes to identifying AI writing. Simply put, these tools are not reliable enough to base an accusation or an integrity-violation penalty on.
Secondly, students' access to AI tools that make cheating easy has increased exponentially, and students' skill at using these tools in ways that don't quite rise to the level of cheating is also increasing. Policies will have to be carefully designed to target behaviors that harm learning or undermine academic integrity, while allowing new, legitimate AI practices to emerge.
The good news is that the policing approach to academic integrity has always been problematic, and there are actually many strategies that we can use instead. Policing students undermines our efforts to build a community of writers in the classroom. It positions teachers and students as enemies rather than partners.
In addition, and I think this is especially important now, policing approaches reduce the complexities of academic integrity to a simple rule: cite your sources -- depriving students of the underlying understanding they need to actually write with integrity.
Most crucially, policing approaches land hardest on our most vulnerable students. Consider this quote (shared with permission) from one of my students last year: "If I write like myself, I get points off for not following the rubric. If I fix my grammar and follow the template, my teacher will look at me and assume I used ChatGPT because brown people can't write good enough."
As teachers, yes, we can sometimes spot machine-generated writing, but our ability to detect it is also biased: we catch AI when a student uses it clumsily, while students who already have the skills to use it well often fly under our radar.
Thus I like this quote from Jessica Adams Gregorioff:
More on the “good news” front: our senate at SF State passed this resolution about AI use last year.
And, our existing, pre-AI senate policy on academic integrity is actually robust and works for me, as a writing teacher, even now, in the age of AI. This policy prohibits fraud of all kinds when it comes to assignments. But it also contains this:
I like this part of the policy because it encourages us to make academic integrity a key component of our teaching. It elevates important values underlying academic integrity: transparency in knowledge construction, the importance of students’ intellectual growth, equity, creativity, and learning.
It encourages us to TEACH rather than police academic integrity.
So how do we pivot from policing to teaching? What can we do to discourage inappropriate use of AI in writing assignments? Here is a list of strategies from my own first-year writing classes. These strategies are not foolproof, but I've tried all of them, and they have helped to deter inappropriate reliance on AI.
Create your AI policy
There are many policy examples to choose from; I recommend finding one that aligns with your teaching philosophy and tailoring it to your needs.
Once you’ve created the policy, include it in your syllabus, but also in assignments and assessment materials. Tie it to the learning goals of your course.
In my first-year writing classes, one of my learning goals is to ensure that students are able to authenticate themselves in their writing since authentication skills are increasingly important. I share with students that Anthropic recently banned AI-generated cover letters and resumes. I share this video, made by an SF State student, who lost out on a job because her application materials were flagged by HR as AI-generated.
On my syllabus and in each assignment I include the following requirement:
Convince your reader of authenticity. Sentences or passages that sound fake must be rewritten for credit. Meeting this requirement will prepare you for writing in the kinds of academic and professional contexts where automated or synthetic writing won’t suffice.
Notice the word “sound.” I emphasize to students that no matter what tools or strategies they use as writers, their goal is to create a persuasive ethos in their writing, in part by sounding human. My conversations with students about this sometimes look like this:
Me, highlighting a sentence or passage in a student draft: This sounds kinda fake?
Student: No, I wrote that!
Me: That’s great, but it sounds sing-songy, like a robot? So it’s not really convincing me. What if you revised it by….
This kind of feedback tells students how their writing choices are impacting their reader. It is not unlike the feedback I give to students who overly rely on rigid school genre forms such as the five paragraph essay: “this sounds like an essay written for a grade, but not to persuade. What if you revised by…”
Respond to the writing, not the tools
Similar to the above, this tip helps me avoid fruitless conversations with students where I try to get them to admit they used AI, and they steadfastly deny it. Instead, I provide feedback that shows where the synthetic writing fell short. I’ll say things like “this sentence doesn’t really take me anywhere as a reader. It’s a general summary. But as a reader, I wanted more argument and analysis from your perspective.”
Or: “This concept is interesting but it doesn’t connect to the text we read. When you revise, my advice is to take this out, or add a lot more context that helps your reader understand the connection you’re making.”
Or: “These ideas have a big history behind them, but I don’t see any citations. When you revise, my advice is to take this section out, or acknowledge the sources behind the ideas, so authors get credit for their work.”
Or: “In this section, I couldn’t find your analysis or argument. Remember our goal in these essays: to provide your take on the issue, so that your reader gains a new or deeper understanding. To revise this section, I would either take it out, or recast it so that you’re addressing questions like: Why do you think that… ? What has been your experience with …? What about the problem of…?”
Or, simply: “What does this phrase mean to you? Define and connect to your argument, or take it out.”
I also point out hallucinated or fabricated quotations or facts. This does not require an accusation of plagiarism or AI use. I simply tell the student, "These quotes aren't in the text you're attributing them to," and ask them to take them out and/or revise for credit.
Make integrity part of students’ rhetorical education
In the writing program we teach academic integrity as ethical rhetorical decision making in the context of linguistic justice and the development of voice in underrepresented student writers.
That last principle in the image above — that writers construct knowledge — is at the heart of information literacy, which is already one of our program SLOs. When we teach information literacy, we teach source citation, but we also teach the whys of citation, helping students see that transparency is an important part of knowledge construction and of gaining readers' trust. This leads to my next tip:
Make students accountable for what comes after their byline
Students strongly identify with the idea of byline accountability, in part because of their long experiences with social media and online writing. Students do not want to “platform toxicity” or reproduce biases. They know what it means to be called out for saying something they didn’t mean.
Last semester, I demoed what byline accountability meant for me as a reader of their work. I showed students a ChatGPT-generated analysis of Amy Tan's "Mother Tongue." The automated essay was a relatively convincing analysis, sympathetic to Tan's struggles as a multilingual child of immigrant parents.
But the AI-generated essay included this line:
“In the end, Tan struggled because of her mother’s grammar mistakes and broken English.”
This line jumped out at my students -- it is the exact opposite of the point Tan is making, and it reinforces the linguistic racism she is critiquing by locating the problem in Tan and her parents rather than in societal biases. Engaging students in a conversation about their responsibility as writers — and about how carefully one must read and analyze synthetic writing for bias and misinformation before putting one's name on it — disincentivizes inappropriate use.
Center student meaning-making and create buy-in for assignments
I ask students to share their writing with our class on a regular basis, during a "Writer's Chair" activity that puts the student author at the center of class discussion for a day. Their classmates are their audience, and we read the student text as we would a published article. Students over the past few semesters have been very clear that they are bored and uninterested in reading synthetic boilerplate when we do Writer's Chair. They, like me, want to hear their classmates' actual ideas, voices, and perspectives. So engaging students in community reading of each other's work, either through peer review activities or as models or texts for class discussion, can go a long way in promoting academic integrity.
Students are also quite frustrated with bot-generated peer feedback, which we can leverage to discourage AI use.
Before I set up peer review activities, I remind students of the Golden Rules of feedback: if you don't want someone to ChatGPT your feedback, then show them the same courtesy. Similarly, just as you don't want someone to upload your draft into an AI platform without permission, don't do that to your classmates.
For more on creating buy-in, check out this guide that CEETL put out last year.
Focus on metacognition and the writing process
I require that students turn in notes and outlines that connect to their final drafts, and that they create a “statement of goals and choices” about their own writing. The statement of goals and choices asks them to comment on strengths and weaknesses in their essay, and to explain their rhetorical choices. I sell both these practices to students as “proof of integrity” insurance. Many students fear being accused of AI use, and appreciate having a way to “prove” they didn’t. Plus, developing a strong writing process and metacognition around writing are important SLOs in any writing class.
Close the gap between classwork and homework
I've made two recent "close the gap" tweaks to my course design that have made my assignments more AI-resistant. First, I now base assignments almost completely on class discussion. The writing prompts ask students to incorporate notes, ideas, or quotes from class discussion (bonus: this helps with attendance too). Each prompt requires students to build on an idea or question raised in class, or to quote a classmate in their analysis.
Some of my assignments ask students to analyze each other’s writing. A current assignment, for example, asks students to explain how the perspective in a classmate’s essay differed from and also built on the argument offered in the assigned reading.
Second, I moved my essay deadlines closer to the end of my last class period for the week. Previously, in an MW class, I made essays due on Sunday night. I now have them due Wednesday night, after our last class meeting of the week. This ensures that students are writing while their ideas from class are still fresh, so they are less likely to feel they need the crutch of AI.
Teach critical AI literacy
Students are more likely to misuse text generators if they trust them too much.
Lessons involving critical AI literacy are a key way to teach academic integrity. Academic integrity instruction happens in my class when I help students critique mechanistic approaches to writing, especially when those approaches reproduce bias and promote inequality. It happens when I encourage students to resist efforts to standardize their writing, and when I emphasize that writing is an opportunity to make their voices heard, to be creative, to learn, and to make their worlds a better place.
A critical -- even resistant -- approach to AI technology is aligned in profound ways with the social justice mission of our campus. Check out the CEETL page on AI for more.
Listen to, and yes trust, students
In the Writing Program at SF State, we talk openly with students about their writing and their learning, focusing on critical literacy and writing as meaning-making. We value linguistic diversity and celebrate students' diverse ways with words. Last year, I asked my students what teachers could do to help them avoid inappropriate AI use in their writing. I'll conclude with a few of their responses:
