As generative AI becomes more advanced, Calvin professors – like their colleagues across the country – have had to figure out how the emerging technology fits into their classrooms and teaching.
Generative AI programs rose to prominence in 2022, after OpenAI released an early version of ChatGPT. The program was built on large language models (LLMs) – more powerful descendants of the language-modeling techniques behind older tools such as predictive text – and allowed users to generate text almost instantly from prompts typed into the program.
The new technology – and its predecessors – has changed how some Calvin professors teach. For professor of computer science Ken Arnold, LLMs have become a useful tool.
Arnold uses Claude – a generative LLM introduced in 2023 – to summarize in-class assignments submitted by students. These summaries can then provide the basis for in-class discussion and help Arnold predict “what clarifying questions will students have about this assignment?” He also uses the program to generate unit objectives and example quiz questions – not as a replacement for his teaching materials, but to identify gaps in them.
Arnold is currently integrating another AI tool, CodeHelp, into his classes. CodeHelp is an ‘automated teaching assistant for coding and computer science’. But where other programs function as AI autocomplete tools, CodeHelp uses OpenAI’s GPT-4o model to give students feedback on their code without writing it for them.
Arnold said he’s “hopeful that AI can provide additional support needed to students with less programming background, making CS more inclusive.” However, he’s also “concerned that the allure of getting easy answers will prove too tempting. I’m not primarily thinking about academic integrity, but about skill development.”
Working to prevent AI use
The rise of generative AI has led to changes in a number of other classrooms as well. In psychology professor Marjorie Gunnoe’s classes, handwritten in-class essays have replaced take-home essays, and her expectations for students’ work have gone down. Students who rejoice at not having to write a full take-home paper may not realize what they are losing. “I think students are now being shorted,” Gunnoe said, “in that they are not getting as much mentoring with respect to the craft of writing.”
She now spends less time mentoring students and giving them the feedback they need, and more time “building cases” of suspected AI-enabled academic dishonesty for Calvin’s Office of Student Support, Accountability, and Restoration (OSSAR), which then decides whether disciplinary action is warranted.
According to Gunnoe, students underestimate how easy it is to identify AI-generated content. “When I receive a paper from a student that strings together sophisticated-sounding, individual sentences that do not yield a cohesive whole, I suspect AI,” said Gunnoe. “Then I spend a lot of time documenting the evidence for AI use and convincing the student that they should admit that their paper was AI-generated.”
The alternative to confessing, according to Gunnoe, is triggering a formal conduct hearing “which they will likely lose, and just wastes more time.”
When Gunnoe – like many other professors – knows that a paper has been AI-generated, she gives it a poor grade, regardless of how polished it is. Students essentially shoot themselves in the foot when they resort to AI. “I think students don’t understand that most profs will give a higher grade to a poorly written paper that demonstrates student effort than a paper that sounds like it’s AI written,” said Gunnoe.
Kevin Timpe, a professor of philosophy with his own reservations about the use of AI, said that efforts by professors to detect and prevent AI use in the classroom have resulted in a “considerable drain on their time and energies.”
AI as an emerging technology
Craig Hanson, professor of art history, acknowledges that we are only just beginning to use AI in practical settings, and that the norms and expectations we will have around AI use decades from now are only beginning to take shape. He wrestles with this uncertainty in his role as an educator. “The challenge in the classroom, for me, is acknowledging the significance of the moment while still pursuing particular learning objectives,” said Hanson.
Honoring this balance between the new technology and students’ education makes academic transparency and honest acknowledgment non-negotiable. Hanson’s hope is for students to use AI as a tool that aids their development as scholars rather than supplanting their agency. Exactly how to do that is still unclear. “Using Perplexity (instead of Google) for exploratory research ... brings AI into the process from the very beginning. I’m fine with that, but it needs to be acknowledged,” Hanson said.
In the classroom, maintaining this balance will likely mean many more handwritten in-class assignments, as Gunnoe is already assigning. This will probably change the way students learn, but Hanson is cautious about declaring it a net negative or positive. “Writing in class is necessarily less refined – at least for most people – but it also is a valuable mode of writing, and more practice/experience with the mode would – one hopes – also build useful skills, in addition to serving as an assessment mechanism,” said Hanson. “In some ways, it’s the difference between grading a first draft and grading a second or third draft.”
As an art historian, Hanson thinks that AI doesn’t change many fundamental questions artists and connoisseurs have been grappling with for a long time, such as how we view a painter drawing from a photograph as opposed to drawing from memory. “If, on the other hand, a maker is looking to AI for something like ‘artistic agency’ (in the sense of AI developing its own artistic practice and body of work) that would, it seems to me, be new,” said Hanson. “AI is the ‘subconscious’ not of an individual but of whatever the AI has been trained on.”
AI as a tool?
Many people compare LLMs like ChatGPT to calculators: both are tools that can be useful for certain tasks. While the comparison holds to some extent, the calculator analogy falls short for Timpe.
Timpe concedes that AI can be used appropriately. He gives the example of using it to generate a first draft of an abstract for a chapter he’d written; that draft then underwent intense editing to make it an adequate reflection of his goals as a communicator and his voice as an author. However, there are pressures to misuse AI, and the incentives currently stack up in such a way that most people will default to using it in inappropriate ways.
Here, Timpe echoes Sabrina Little, a professor at Christopher Newport University. In a Psychology Today article titled “Why Students Should Resist Using ChatGPT,” Little argues that learning is a transformative experience, and that by outsourcing your thinking to an LLM, you only cheat yourself out of your own development. Writing – and by extension thinking – is a painful and consequently character-building process; it forces one to exercise perseverance and confront the initial shallowness of one’s thinking.
“AI is often used in ways that’s not just an aid to our thinking and writing process, but actually outsources our thinking and writing in problematic ways,” said Timpe.