Many on our campus were taken by surprise by the announcement of the “landmark” partnership between the CSU and several Silicon Valley tech companies.
From the CSU’s press release:
- The partnership positions the CSU to be “the first and largest AI-powered university” in the nation and a “global leader among higher education systems in the impactful, responsible and equitable adoption of artificial intelligence.”
- It will “elevate our students’ educational experience,” “empower our faculty’s teaching and research,” and provide a future workforce for California’s “AI-driven economy.”
- As part of the partnership, students, faculty and staff will receive a free 18-month subscription to ChatGPT.
- The cost to the CSU: nearly $17 million.
The announcement raised many questions from faculty across the system.
It also caught the attention of Marc Watkins, who writes about AI and education for the Chronicle of Higher Ed. Among his questions: Where’s the money going? Do these tools live up to the hype? Are partnerships like these, as well as other “lean-in” university initiatives, performative or strategic? Given that the CSU is also facing severe budget shortfalls, Watkins writes: “How much of [the budget shortfall] the state expects to be offset by the promise of generative AI isn’t known, but education is like any other industry in one regard—if they find a cheaper way of producing a product, then they will adopt it quickly. Unfortunately, that product is learning.”
For me, Watkins’ toughest question is also the most personal: What about the overlooked human costs of AI at the individual faculty and classroom level?
Watkins worries that university partnerships like the CSU’s offer “an illusion of control” that does not match the reality on the ground. On the ground, faculty are scrambling to manage the messy realities of teaching in a time of technology disruption. Here’s Watkins: “Institutions are buying into the idea that if they adopt AI at scale, they can manage how students use it, integrate it seamlessly into teaching and learning, and somehow future-proof education. But the reality is much messier.”
In my own classrooms, and in my conversations with faculty, I can attest to that last sentence: the reality is messy for sure.
Some faculty on our campus are embracing AI tools in their courses, redesigning assignments and activities to challenge students in new ways. But when it comes to teaching students to use the tools, both inside and outside the classroom, these same faculty worry. They note that we haven’t spent enough time thinking through issues of learning loss and learning gaps. What skills and intellectual habits do students need to have firmly under their belts in order to use AI responsibly? How can we ensure that they continue to learn key skills when AI shortcuts are constantly marketed to them?
Watkins is blunt: most institutions simply do not have the resources “to establish a curriculum about applied AI, nor do we have a consensus about how to teach generative AI skills ethically in ways that preserve and enhance our existing skills instead of threatening to atrophy those skills. It will take years of trial and error to integrate AI effectively in our disciplines. That’s assuming the technology will pause for a time. It won’t. Which leaves us in a constant state of trying to adapt. So, why are we investing millions in greater access to tools no one has the bandwidth or resources to learn or integrate?”
For just a small taste of the bandwidth faculty are going to need to reimagine their teaching, read Stephanie Kratz’s post in “The Important Work.” Kratz is frustrated that her first-year writing students may now easily access tools that undercut the rhetorical thinking she is trying to teach. Lacking a concrete way to prevent student use of AI in their writing, she lands on the same rickety advice we’ve given at CEETL when it comes to AI detection: We have to trust students while simultaneously pushing them to critically interrogate the impact of AI tools on their development as writers and students.
Like Kratz, many of us feel like we are left to tinker our way toward classroom best practices. The CSU partnership announcement raises the possibility that we could have a more coordinated approach to curricular change (and to be fair, this overview of AI initiatives across the CSU does point in that more coordinated direction).
But Watkins says we don’t yet have the research to tell us how we should revise the curriculum to incorporate AI tools (we don’t know whether, when, or how these tools foster learning). And despite the $17 million price tag on our new partnership, we don’t have the resources to develop a curriculum that uses AI to enhance existing skills instead of threatening them.
At CEETL, we have committed to a faculty-driven, ground-up, community-sense-making approach to discovering best practices for integrating AI into our curriculum. We are centering critical, ethical perspectives on AI, an approach that dovetails with SF State’s social justice, activist ethos and history.
Admittedly, though, the community sense-making approach does not provide quick answers for faculty who are swamped right now by the “messy reality” of their classrooms and the need to adapt to an ever-evolving tech landscape. In my first-year writing classes, students express a curious contradiction: on the one hand, they tell me that their AI use has evolved past the dreaded ChatGPT cut-and-paste. For example, they “train the bot” on their own writing to ensure that their voice surfaces in AI-assisted assignments. They use bots to help them when they’re stuck, to provide explanations of concepts, and to fill in the details in their essays when they run out of time. They use AI to smooth over “bumpy” sentences.
But on the other hand, students also tell me that even with AI tools, they struggle with “real” writing: they find it difficult to read deeply in preparation for writing; they struggle to shoehorn their ideas into academic genres that, to them, leave no room for their own thinking; they don’t know how to put their ideas alongside, or in dialogue with, academic texts; and they struggle to use AI-generated writing because its authoritative-sounding output overtakes their ideas. If the bot changes their meaning as it edits their “bumpy” sentences, students don’t always have the rhetorical skills to know how to change it back.
Watkins concludes: “Generative AI doesn’t have to be sold as the future of education to be a useful tool. That sort of hype is just the latest tech industry hollow sales pitch, using the familiar language of equity and innovation. If universities don’t start asking harder questions about AI’s real value, they’ll keep spending money they don’t have on tools their students don’t need—while real educational challenges go unresolved.”
The Chancellor’s Office press release on our new AI partnership concludes with a quote from Ed Clark, the CSU’s Chief Information Officer: “At the CSU, we have two imperatives: to equip our students with the skills to leverage these powerful tools, and to transform our own institutional practices through AI to better serve the largest public university system in the nation.”
I don’t want to sound like a Luddite (although I am looking forward to reading this revisionist history of the Luddite movement).
But my money is with Watkins, not Clark, right now. My students are not yet equipped with “the skills to leverage” AI tools, and pedagogical efforts to build those skills are, ironically, undermined by easy access to the tools themselves.
So that’s one example of the messy reality Watkins describes. As for the imperative to “transform our own institutional practices through AI” — well, I’ll be curious to hear from you all about what you think this statement might portend.
I hope you’ll join the conversation around these issues at CEETL. Our most recent newsletter includes Anoshua’s summary of the CSU partnership and CEETL’s AI initiatives, and we are hosting our second annual AI symposium on teaching and learning, “Embracing and Resisting AI: Best Practices in the Classroom,” on May 2. If you want to join us in these conversations, please leave a comment here or reach out (jtrainor@sfsu.edu)!