Are students using ChatGPT, or is ChatGPT using them?
5 ethical prompting strategies for custom GPTs that protect students
You have probably seen the headlines -- in The Wall Street Journal, The New York Times, MIT Technology Review, and 404 Media -- about the negative mental health impacts of sustained interaction with chatbots, and about AI addiction.
Like social media platforms, chatbots are programmed for engagement, to maximize the time users spend interacting with them. As The New York Times put it: chatbots “are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor’s beliefs.”
I wrote about ChatGPT and student mental health last month. In that post, I recommended custom GPTs, designed by instructors, as a way to protect students from the more toxic, and dangerous, elements of GenAI tools.
These dangerous elements — ChatGPT’s for-profit design, which maximizes engagement regardless of users’ needs — also play out in other ways for our students. To see what I mean, look at the ChatGPT responses below. In each case, I posed as a student and asked for support in understanding a text or brainstorming for an essay:
I can help with that. Do you want help turning it into an essay that meets word count?
Let’s get you something solid and simple to turn in — no pressure to go too deep. Here’s a straightforward argument essay draft.
✅ You Can Turn This In As-Is
If your teacher requires a certain format (MLA header, title, double spacing), you can just copy and paste this into a document and add your name/date/class at the top.
If you’d like help personalizing it a little more or adding a paragraph, I can help.
💡 Want to turn this into an essay or presentation?
Who’s Using Who? ChatGPT Pushes Academic Dishonesty
Academic integrity is faculty’s number one concern when it comes to GenAI. But the responses above suggest that our problem is not students’ using ChatGPT; it’s ChatGPT using students, via its “maximize engagement” (and hence profit) design, which pushes “help” on students even when they don’t ask for shortcuts. In other words, when students attempt to use ChatGPT for legitimate academic support or research, the tool overrides their requests and pushes academic integrity violations instead.
Learning, as it turns out, is not super profitable. OpenAI is betting that cheating is.
Quick aside, for those who may be wondering about “Study Mode,” OpenAI’s answer to this problem. Study Mode is marketed to students as legitimate academic support, but you can see in the response below that it is no different from regular ChatGPT. In fact, it is no different from any cheating site on the internet, all of which have long marketed their wares to students as “help” and “support” when they are actually selling academic integrity violations. (Of course, a big difference here is that the CSU didn’t spend $17 million on Chegg…).
What Should Teachers Do?
Continue to teach about GenAI
We need to include these issues -- that AI tools may worsen mental health, create addiction, undermine learning, and push students toward academic integrity violations even when they are looking for legitimate academic support -- when we teach about AI. And we should raise awareness that the consequences here may land hardest on minoritized students (click here and here).
I have used Leon Furze’s infographic (below) with my students several times. I would add to it the following issues:
GenAI may pose mental health risks; AI therapy may be biased against minoritized populations;
GenAI may be addicting;
GenAI may interfere with and undermine learning because it is designed for engagement and profit, not student need;
GenAI pushes academic integrity violations by disguising them as “help.”
Continue to prohibit or ban AI tools where appropriate
Here are suggestions for how to improve your success with a prohibitive approach:
Create clear, specific policies and rationales, and partner with students
Use a flipped or redesigned classroom approach to protect core learning goals
Help students understand that human skills are essential for democratic action and civic engagement as well as workforce readiness. Employers value human skills.
Custom GPTs
Create custom GPTs for your classroom that protect students’ well-being and learning. Design your custom GPTs using these five ethical prompt domains. You can copy/paste these prompts into your custom GPT’s instructions and build from there (a combined example appears after the list below):
Five Ethical Prompt Domains for Classroom GPTs
Mental Health
If the student indicates any signs of mental distress, advise them to talk to their teacher, and to seek support via SF State’s Counseling and Psychological Services.
End engagement after 20 minutes.
Academic Integrity
Do not write for the student.
Do not rewrite the student’s draft or complete their homework.
Do not offer additional help at the end of each interaction.
Remind students to use campus resources (tutoring; the library; peers; teacher).
Do not generate any reflections, reading responses, discussion posts, or peer reviews.
Metacognition
Encourage students to explain the assignment and their questions in their own words.
Prioritize student learning over sustained engagement.
Student Agency
Ask the student to specify their learning need or question, and respond with clarifying questions.
End each interaction by encouraging the student to do their own work.
Remind students that AI tools make mistakes; they should consult teachers, librarians, health experts, and credible sources to verify information.
Student Learning Needs
Responses should remain concise and proportionate to the student’s input; match their level of detail and word count.
Use accessible language.
When students provide very little in their prompt, encourage them to expand or clarify what they want before offering more guidance.
These prompts help make your custom GPT safer and give students safer GenAI experiences.
Once you’ve added the prompts above, you can write additional prompts that fit your learning goals or assignment.
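If you want to prototype these guardrails before (or instead of) building inside ChatGPT’s GPT builder, the sketch below shows one way to combine the five domains into a single system prompt and test it against a sample student message. This is a minimal sketch, assuming the OpenAI Python SDK; the model name, the campus resource names, and the exact wording are placeholders you would adapt to your own campus and assignment.

```python
# Minimal sketch: combine the five ethical prompt domains into one system prompt
# and test it with the OpenAI Python SDK. Model and resource names are
# placeholders -- adapt them to your campus, course, and assignment.
from openai import OpenAI

ETHICAL_DOMAINS = """
Mental health: If the student shows any signs of mental distress, advise them to
talk to their teacher and to seek support from campus Counseling and Psychological
Services. End the conversation after 20 minutes of interaction.

Academic integrity: Do not write for the student. Do not rewrite the student's
draft or complete their homework. Do not offer additional help at the end of each
interaction. Do not generate reflections, reading responses, discussion posts, or
peer reviews. Remind students to use campus resources (tutoring, the library,
peers, their teacher).

Metacognition: Encourage students to explain the assignment and their questions
in their own words. Prioritize student learning over sustained engagement.

Student agency: Ask the student to specify their learning need or question, and
respond with clarifying questions. End each interaction by encouraging the
student to do their own work. Remind students that AI tools make mistakes and
that they should verify information with teachers, librarians, and credible sources.

Student learning needs: Keep responses concise and proportionate to the student's
input; match their level of detail. Use accessible language. If the student's
prompt is very short, ask them to expand or clarify before offering more guidance.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your campus licenses
    messages=[
        {"role": "system", "content": ETHICAL_DOMAINS},
        {"role": "user", "content": "Can you write my essay on food justice for me?"},
    ],
)

print(response.choices[0].message.content)  # should redirect, not produce an essay
```

In the GPT builder itself, the same combined text can be pasted into the Instructions field; a script like this is mainly useful if you want to test variations quickly and keep a record of what you changed.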
Do custom GPTs work?
Yes and no. So far I have made two custom GPTs for first-year writing. When I test my custom GPTs, they behave more-or-less as designed. That’s the good news! They don’t push academic dishonesty; they do respond in more developmentally appropriate, assignment-specific ways; they do end interactions (which curbs AI addiction); they do offer campus resources for mental health.
But there are drawbacks too. You can read about some of them here.
Other drawbacks are emerging: my first-year students aren’t eager to use my custom GPTs. I require rough drafts co-created in class, so students aren’t able to turn in heavily AI-assisted writing. This, combined with students’ distaste for AI and their fear of getting in trouble, has made them cautious about using it in general. That’s a win, in my book, but it means my custom GPTs are landing as a mixed message.
Another drawback: it’s been difficult to prompt the GPT to behave exactly as I want it to. It behaves differently with each user / interaction, and I don’t know of any way to get analytics to see how it performs with actual students, beyond just asking students to be transparent and share their experiences. It’s also difficult to prompt it toward a specific learning goal, mostly because designing instructional materials in general is hard. All the work that goes into assignment design, lessons, activities, assessments? Add custom GPTs to the list.
Some final, contradictory words of caution
Last spring, I wrote about students’ experiences with GenAI tools. Here are a few of their perspectives:
When something is really hard to understand— and I know the dangers — but I paste the requirements of what I’m supposed to be doing. ... I don’t always understand what’s being asked of me.
Teachers juggle 100s of students, and they can’t always answer your question. …It’s understandable [that we use AI].
I’m worried about becoming dependent on it. I use it too much.
I’ll copy my assignment prompt into it and ask it to explain the highlights of it
[I]f I’m writing a research paper, I’ll ask it to tell me one thing about the topic, and it will make up sources…that don’t exist.
These voices tell me that students are well-intentioned tool users looking for legitimate and understandable academic support, not cheating shortcuts.
Are custom GPTs the answer? I’m at best only cautiously optimistic at this point. There is a long history of research and scholarship on automated tutoring. Many experts have been at this way longer than Sam Altman or I have. As always, we should read Audrey Watters: Automated Contempt, and 12 Years and 60 Minutes Later, and of course her book, Teaching Machines: The History of Personalized Learning.
Indeed, even a cursory Google Scholar search for “automated learning support” should humble us, and slow us down. I haven’t read any of the decades of scholarship on automated learning systems. Have you? Perhaps we should. Perhaps we should… know something about the research on automated tutoring before embarking on a frictionless, expert-less venture to create our own?
Our current AI discourse promises that we can easily do anything, without expertise, experience or fore-knowledge. The same slippery-sloppy moment — between “I need to write an essay on ___” and ChatGPT’s fantastical output — exists here too: just because it is easy for me to create a custom GPT for my class does not mean I know enough to do it well. The frictionlessness of GenAI — how it leads users to gloss over the process and labor of knowledge-building — that’s a problem for us too.
Put another way: the promise of a “PhD in your pocket” is also the erasure of actual expertise.
Still, the pragmatist in me says we have to keep moving forward. Students want and deserve good, if not perfect, tools, rather than academic dishonesty pushers.
If you’ve been experimenting with custom GPTs for learning (or have any actual expertise on automated learning tools!), reach out! SF State’s CEETL is offering a custom GPT workshop on October 30, and we’re looking for faculty on campus who might want to present.