When it comes to AI, “there is much more going on on the human side of the screen than necessarily on the digital side.”
—Emanuel Maiberg, 404 Media
What does “the human side of the screen” look like, when it comes to student use of GenAI?
We know that student use is high. How high? Check out this post by Annette Vee for some eye-opening statistics. Based on these numbers, as well as her own qualitative data, Vee concludes:
(1) students often think they know about AI because they’re users of AI; (2) students need more examples of what productive AI use could be because most of them are finding uses on their own—not all of which seem great; (3) most faculty have no idea how prevalent, varied, and complex AI use is among students, and students sense that gap.
In an effort to bridge that gap, I recently visited Dr. Aviva Sinervo’s upper division psychology writing courses at SF State. Aviva’s curriculum is packed with opportunities for students to design their own writing topics, engage in disciplinary research, give and receive feedback, and embrace the drafting process. Aviva does not ban GenAI in her courses but rather focuses on building a community of writers who routinely share their drafts with each other. She shows students how to follow APA guidelines for GenAI use and emphasizes the importance of human peer feedback.
When I visited Aviva’s classes, I rotated through small groups, asking each group: How did AI fit into your writing process for this class? Do you use AI in other contexts? Does AI help or hinder your writing process? And did you find it easy or difficult to follow the APA requirements for AI use in this course? I took notes in real time as students talked, trying to capture as many of their words as I could. I transcribed directly from those notes, editing for typos and, occasionally, for clarity. Students gave me permission to share their perspectives.
So, what did I learn? Like Vee, I learned that on the ground, the story is complicated. Students were generally not using GenAI to outsource their learning or to take easy shortcuts in their writing for Aviva’s class. Those kinds of GenAI use fade into the background when teachers create the sorts of deep learning and collaboration opportunities that Aviva created in these classes. But beyond that, as Vee says, the story of students’ GenAI use is complicated. As Brett Vogelsinger has recently noted, we may need to expand beyond the verb “use” when it comes to GenAI.
Here are six themes that emerged from my discussions with Aviva’s students:
The Student Side of the Screen
Disclosure Is Tricky
Although GenAI disclosure is considered a best practice, students don’t always see the need for it. Why? Because they are using GenAI to brainstorm and get ideas, rather than to write drafts for them.
This may be a distinction without a difference, but students viewed the prompt they gave to AI as their content, and because the AI output was based on their prompt, they felt it was also their own.
This view reminded me of theories of distributed cognition: if you ask what time it is, and I glance at my watch, has the watch told the time, or have I? For the students I spoke with, the watch is merely a tool; the knowledge of time belongs to, and emanates from, the person using the tool.
I think if you’re someone who struggles with creative output, and you’re using ChatGPT, then whatever comes out of ChatGPT is still coming from you. It’s maybe AI-inspired, but it’s still you. Because to get anything good, you have to put a lot in, so then that is you.
I still do my own work. The prompts I use, I have to give it a lot of context. “Imagine you are writing a paper and this is where your discussion is currently at. Summarize it as an oral presentation.” Putting in the context is all me and the context creates the output.
ChatGPT is no different than a teacher’s slideshow or lecture notes. You are just getting material in a different manner.
When you ask it questions, the questions are coming from you. Even if Chat gives you a soulless answer, the question is the main part that is important. That’s part of the creative process, asking the questions.
It’s never a bad idea to get more topics and ideas and resources. It’s only bad when you wrongfully take credit for it. But here the ideas and topics are coming from you, and you can cite the resources it gives you, if it provides a citation link.
Students felt that using GenAI for brainstorming was unremarkable — like using a Google Docs template to format your headings, or searching the internet to learn about an idea. Aviva’s students cited GenAI, per her policy, when they took ideas directly from it (e.g., in asking it to help them come up with research questions) but did not see the purpose of citing or disclosing more amorphous brainstorming sessions. They weren’t trying to hide this type of GenAI use; they just didn’t feel it rose to a level that required citation.
My take-aways — i.e., things I am thinking about for this fall:
We need strong classroom relationships that center students’ learning — as Aviva’s classes do. Relationships create a culture of trust that facilitates open discussions of GenAI use.
As Aviva says, we should remind students that if they use GenAI for brainstorming, the ideas must start with them and must be rooted in their own genuine interests; the output must be checked for accuracy and relevance.
Students may need more understanding that brainstorming with GenAI produces an output that comes from — we might even say “has been stolen from” — other humans. GenAI may be merely extending students’ own thinking, but it is also built on the work of others, and may leave out perspectives that have been historically marginalized. Understanding this may motivate students toward greater transparency and citation in their use. Here is an excerpt from the AI syllabus policy I plan to use this fall:
You are responsible for ethically citing copyrighted material or others’ intellectual property, and for empowering your reader by making it clear to them how your text was created and where your ideas came from. You must seek out the authors and writers who originated any ideas you gain from AI, and cite them. In addition, I recommend seeking out perspectives from communities that have been historically silenced and that may be left out of GenAI output.
Impenetrable Assignments
Several students said they used ChatGPT to get a handle on what professors are looking for in assignments. They upload the assignment and ask ChatGPT for process steps, hidden criteria they are expected to meet, or strategies for getting started. One student showed me a recent assignment from another course she was taking (not Aviva’s): the assignment was a page-long block of text, with no indentation or headings, and no list of scaffolding steps or assessment criteria.
A lot of assignments are like this, where it’s unclear what the professor is looking for. So I ask ChatGPT, and it will give me the criteria I should follow and like, a list of where to start.
When something is really hard to understand — and I know the dangers — I ask it to dumb it down for me. I paste the requirements of what I’m supposed to be doing, and it tells me in a better way. It’s a lot of stuff I don’t understand, and it will make it make sense. I don’t always understand what’s being asked of me. Maybe it’s just my brain, but I don’t get it, the assignment.
I’ll copy my assignment prompt into it and ask it to explain the highlights of it, and when you read the bullet points, it’s like having a peer review before you start working on it.
My take-aways:
Same as it ever was: transparent assignments are important. Aviva’s students didn’t need to ask ChatGPT for help with her assignments because her assignments are clear and accessible. She provides video guides (even for her in-person classes) that explain each task and its assessment criteria. If you’re interested in assignment redesign, I recommend this article, about the hidden expectations and assumptions that are too often embedded in our assignments; see also the TILT model and Accessible Syllabus.
Same as it ever was: give students class time to parse assignments and ask questions, so that they don’t feel they need to ask ChatGPT.
On the other hand: OpenAI (among others) offers instructors the possibility of creating custom GPTs trained on their own course materials that could handle student questions about syllabi, assignments, and course materials. An animated archive of one’s own course materials sounds intriguing (and this metaphor — of the animated archive — is one of the best I’ve come across for GenAI). But then I read things like this, and I worry that budget-conscious administrators may see custom GPTs as a replacement for tutors and human support systems, or use them as a justification for heavier teaching loads.
Writer’s Block and Neurodiversity
Several students cited neurodiversity and learning disorders that lead to writer’s block, especially in classes that, unlike Aviva’s, don’t provide time and support for writing. Students feel they have great ideas but don’t know how to say them correctly; they get “stuck” when faced with translating what’s in their head onto the page; they don’t know how to turn their freewrites into a rough draft; the required formats and genre conventions seem daunting or alienating.
With ChatGPT I use the voice feature and just talk about my ideas, and then I tell it the format and it really helps me get the paper done.
I started using it because I was worried that I was at a disadvantage by not using it. So many people are using it. It facilitates your thinking process. It helps you expand on your own ideas. If I need more information, it’s so elaborate. I would have found it if I'd researched it myself, but this is more accessible.
It helps with rearranging sentences, to get a better flow. It helps me to format quotes.
I have ADHD so I struggle with expressing myself. Inside of me doesn’t come out. I try, but it does not seem to work. But ChatGPT really helps me formulate and organize everything.
I work with design and creating designs, so I’ll ask AI to help me put my vision to life. It helps me get ideas out of my head.
I have ideas that I can’t put into writing. ChatGPT helps. I use it to help me create outlines. Because I can’t think that way sometimes.
In high school my teachers wouldn’t let me write how I wanted to write. I’m always writing in “how I should be writing” mode. I question my writing A LOT. I was brainwashed. So when I have to write in that way, I use ChatGPT, because it does that kind of writing perfectly.
Todd Walker and I have written about how GenAI may support students’ writing process, easing some of the cognitive load of drafting for novice writers, functioning in much the same way as teacher-provided templates and formulas like those in They Say / I Say.
I’m inclined to listen to students who say that GenAI unlocks their thinking. I know that students find the process of writing difficult and that they struggle to write in unfamiliar genres without losing their voice and agency. My take-aways:
Provide more instruction on the writing process. I like Bruce Ballenger’s The Curious Researcher, which takes students step-by-step through research and drafting.
Provide more time in class for students to work on process steps and drafting. I love Aviva’s take here: It’s really about multiple opportunities to practice…. Students draft their argument in a myriad of different ways, adding layers as they go from questions to hypotheses to evidence to analysis and argument. This happens across the semester. So part of this is about convincing students that it needs to evolve over time, rather than be perfect at the beginning.
GenAI Can Learn to Write Like You
Many students spoke of efforts to humanize AI-generated writing, including putting errors in. One student suggested a new term, “un-editing,” and described it as the opposite of what you would normally do before turning in an essay: you have to put some errors in to make it sound authentic.
My high school teacher told us to read our work out loud, to catch the problems and fix the grammar. Now we do the same thing, but in reverse, so it sounds more legit.
Instead of copying it, I reword it. It takes longer than you’d expect. It might have been easier to write it myself. It doesn’t sound like you. So you have to sit there and change every word. I’m “un-editing” it.
Students noted that you can avoid the work of “un-editing” by training ChatGPT to “sound like you.” The dystopian scenes of hybrid GenAI-human writing that Kyle Chayka described way back in 2023 have arrived, apparently.
OpenAI itself markets this “personalization” — the user’s ability to orient ChatGPT to one’s personal preferences, and to train ChatGPT to adopt the tone, opinions, and voice one chooses:
With our updated settings, you can tell ChatGPT the traits you want it to have, how you want it to talk to you, and any rules you want it to follow. If you’re a scientist using ChatGPT to do research, you’ll want it to engage with you like a lab assistant. If you’re caring for an elderly family member and need tips or companionship ideas, you might want ChatGPT to adopt a supportive tone.
The students I spoke with were aware of these capabilities, and hence not concerned that using GenAI would result in a robotic tone, where everyone sounds the same.
I have noticed that it starts to sound like me, after a while. That can really help with feeling like you’re not cheating. If it sounds like you, and you’re putting in the questions, is it really wrong or even harmful?
You can train it on your writing. If you upload a lot of your academic papers, it will write one in the same way, so it doesn’t sound so fake and also you don’t sound like you know more than you do.
I try to make sure it sounds like me. I will give it stuff I’ve written, and tell it to revise so it is more like that tone and level. Sometimes I put errors and weird words back in. It might not be grammatical, but it’s not to the point a teacher would care, and it sounds more authentic.
I ask it to stay in my tone. I give it old writing. Then I could cut and paste it. But I read it over, and there are places where it doesn’t work, and then I have to fix it.
It uses the same errors you usually do, which makes it sound more like you.
My take-aways:
Center linguistic justice, and show students the biases in GenAI output, even in seemingly benign uses such as editing.
Teach students the connections between writing and thinking.
Support and Entertainment
Aviva’s students use GenAI in many low-stakes ways: for entertainment, learning support, feedback, and even therapy. Their examples were somewhat worrisome (GenAI is problematic for therapy, for example). But I also found myself joining in the students’ delight at the creativity behind some of these uses.
One student described using ChatGPT to create fanfiction to her specifications:
I love reading. I ask ChatGPT: can you make up a deleted scene for me? It’s automated fanfiction. But it doesn’t always listen to your prompts. I tell it “this person is blond. Her name is so-and-so” and it just keeps changing it and getting it wrong. When it becomes alive, it will come for me first, because I keep forcing it to write what I want!
Another spoke of getting low-stakes therapy support:
I put off ChatGPT for a long time. I associate it with kids who don’t do their work. I have a moral / intellectual thing against it. When it comes to your brain, it’s “use it or lose it.” I think things should be challenging. So I’ve only used it for things outside of school. And I won’t use my own name, because they’re using me for training. I have no interest in feeding it into supercomputers. But one time I had a mental breakdown where I yelled at my cat, so I told it, “I feel like shit and I lashed out at my cat” and it said “you’re human. It’s okay. I understand you’re going through a lot. This moment doesn’t define you.” And honestly, yeah. It’s very good at therapeutic talk.
Many felt it was useful for personalized search and learning needs.
I use it for business and career advantages. It can give me a list of names of companies and we’ll pick a company to apply to for jobs.
In some classes, we have to read chapters and do quizzes, and I was struggling. So I just put the chapter in Chat. It’s like SparkNotes.
I use it for study guides. I double-check it. I use Quizlet first — I try my best to stay with that. But sometimes I can’t find any answers, so I turn to ChatGPT. It’s hit or miss with ChatGPT, but it gives me a lot.
It helps me with ideas or an outline for a slide presentation. It gives you slide headings.
For me, it’s like a source, for learning, for studying, terms I don’t understand. I put them in and ask it to explain like I’m in high school. It clarifies my understanding.
Others, more worryingly, use it for reading:
With reading, I can put the article in ChatGPT and it gives me an overview. But I have to ask more questions to dig deeper. If I’m not understanding, I have to ask it more and more.
If an article is too dense for me, then I ask [AI] to summarize it, and [the AI] guides me through it.
Chat is good at summarizing and re-explaining, but it can’t connect and it doesn’t understand its own words.
I use it for reading, because it will give me a summary if I don’t have time.
In general, none of Aviva’s students wanted to use AI for feedback on their writing. They prefer instructor feedback over GenAI, and they value their peers’ perspectives as well. Some pointed out that AI feedback often just recaps the writer’s main ideas. But a few were open to the possibilities:
As long as the review is useful for me, then I’ll use it. As long as it delivers. I’d prefer it was human, but if AI gets the job done?
It’s helpful for matching criteria rather than generating writing. In classes where there’s no feedback, I ask it to use the criteria to tell me what’s missing. It helps me take my ideas and gives me a new perspective.
I don’t get any feedback in some courses, and I don’t have anyone to look over it. And I’ll ask AI to help me meet the criteria. I still take it with a grain of salt. It can make mistakes. But if it can give me more specific instructions for turning a rough draft into a final draft, it’s good.
Teachers juggle hundreds of students, and they can’t always answer your question. You have to wait 3 to 5 business days to get a response. It’s understandable.
My take-aways:
Aviva’s students had clearly learned the value of peer feedback and understood that gauging an audience’s reaction to their drafts was essential. They used peer feedback productively to revise, and they spoke about how much they learned from giving feedback and how helpful their peers in Aviva’s class, specifically, were at giving it. This tells me that one way to prevent students from outsourcing their learning to GenAI is to make sure our classes are rich in opportunities to collaborate with peers, and to teach students, as Aviva has, how to be good collaborators and feedback-givers.
Skepticism
Despite their seeming embrace of GenAI, students on the whole expressed a significant degree of wariness and skepticism. They understood that outsourcing their learning was a bad idea, and many noted that using it even for brainstorming harmed their ability to think critically and analytically.
I refuse to use it for my writing. I don’t believe it’s okay. That’s your opportunity to use your mind. For little things, answering quizzes and responses, okay. But for essays, that’s your opportunity to learn.
I’ve seen Chat record the lecturer and give you the summary. So if you’re lazy, fine, but you’re supposed to jot down your own notes.
One peer review I got in this class was AI, so I gave it low stars, because it was just a paragraph of what I already said.
Shortcuts through the brainstorming phase are not good. I need the brainstorming to understand my topic.
It makes you lazy. Copy, paste, and give me the paper. It’s obvious when they used ChatGPT and didn’t even try to read it.
I’m worried about becoming dependent on it. I would use it too much.
People should always prefer people. We should not become dependent on it.
It gets a lot of things wrong.
Using AI is ironically even more work. Not only do I have to fact check it, I also have to rewrite it in my own voice.
[I]f I’m writing a research paper, I’ll ask it to tell me one thing about the topic, and it will make up the source. When I look up the source, it doesn’t exist.
I put in a link and it summarized a completely different article.
I’m curious to hear what others are learning from talking with students about AI use in the writing process. This fall, at SF State, we’ll be running a few small teaching squares focused on the kinds of questions raised here. Thanks to a grant from the CSU, we can pay folks to participate. If you’re interested, reach out!
A huge thank you to Dr. Aviva Sinervo for her inspirational teaching and her generosity in collaborating with me on this post.