I am a computer science student at Calvin who recently wrote a statement about the use of Large Language Models (LLMs) in my classes. The companies behind this tech have claimed the more general term “AI” for their products. For clarity, I will not be doing the same.
Generating code is easily one of the best uses for LLMs, and many of the most important critiques of these tools do not apply to it. Still, I am uncomfortable using them in my computer science classes, for several reasons.
Firstly, and least importantly, I do not enjoy using these tools very much. I prefer to own and understand every part of my code, and I deeply enjoy the minutiae of discovering how to do something specific. When I use an LLM to code, more of my time goes to debugging than to crafting the software itself. Although I enjoy debugging as well, it does not foster the craftsmanship and excellence that I find core to the experience of vocation. I would always rather have a genuine sense of ownership over my code than some average, or even perfect, solution I barely grasp.
Furthermore, LLMs are built on uncited work. As students, academics, businesspeople and people with integrity, we encourage each other to trace the origins of our ideas. We cite our sources and keep good records so that any investigation into the factuality, legality or history of our work can be done in good faith. Generative statistical models are incapable of this kind of tracing. Their answers are informed by the sum total of their training material, which is not cited. They can model and reproduce which sources go with which kinds of information, but they cannot report what actually informed a given answer. Even when an LLM says the same thing as its supposed source, citing that source is dishonest, because the actual origin of the information, as the model produces it, is complex and impossible to investigate: everything the model was trained on shapes the response. Citing the LLM itself cannot replace the citation web it severs, because the model is non-deterministic and rapidly changing.
As a consequence, these tools disrupt learning communities. Hardworking developers who have shared their knowledge publicly now risk seeing their work buried and ignored. Their generous gift of time and knowledge fades into the background, with no opportunity for gratitude or follow-up. It may be more efficient to ask a model than to find and parse a blog post or a forum like Stack Overflow, but the developer who does so is cut off from any awareness of their fellow programmers. Having developed some larger projects by myself, I am indebted to genuine communities of developers on the web dedicated to specific topics. Reliance on LLMs means nothing short of the complete destruction of these communities, which is disrespectful to the hard work of expert developers and a disservice to amateurs. We learn more about programming from the hard work of synthesizing a solution from different sources than from any amount of analyzing or debugging a generated solution to our specific goal.
LLMs are better at rhetoric and clarity than at factuality, and they harm in-person communities of learners as well, by giving people an easy alternative to learning directly from someone. This leaves fewer opportunities to practice asking good questions and understanding the thought processes of others. The cost is in-person connection and skill at a time when those connections are needed more than ever, even in the results-oriented world of software.
If asked to use an LLM for class, I will do so and cite my prompts, despite my belief that citing an unpredictable tool with no expertise is dubious and practically worthless. However, I promise to always prioritize the conscious spread of real information, to put in the work to learn directly from others and to work on my skills in communicating with them. I am angry at the normalization of this technology and think it reflects a callous disregard for society and culture. I promise never to work on tools like these. Instead, I intend for my skills and talents to contribute to a world where tech is a respectable tool that freely facilitates community and integrity, and people rely on it no more than they already do. I challenge my fellow developers to hold themselves to the same standards. Anything less would be wholly reprehensible.