Calvin professors continue the conversation about the ethics of AI after the class action lawsuit Bartz v. Anthropic settled for $1.5 billion this summer.
This past summer, tech giant Anthropic came under fire for training AI models on pirated books taken from LibGen. LibGen, short for “Library Genesis,” is a website that allows free sharing of pirated books and publications online, including those published by Calvin faculty.
Three authors filed a class action lawsuit against the company in August of 2024. In the case, Bartz v. Anthropic, Judge William Alsup of a California district court ruled in June that Anthropic’s use of pirated works from LibGen was not protected by fair use.
“I knew that much of my published scholarly work, including entire books, was among those large pirated datasets,” said Kevin Timpe, chair of the Philosophy department at Calvin. Upon hearing of the lawsuit several months ago, Timpe decided to register as a claimant. Shortly after, he mentioned the case to others in his discipline — both those at Calvin and at other institutions.
Since the June ruling, the parties have reached a settlement agreement. Anthropic will pay roughly $3,000 per pirated work, covering an estimated 500,000 works, for a total settlement of $1.5 billion, according to court documents. JND Legal Administration, the settlement administrator, describes Bartz v. Anthropic as “the largest copyright class action settlement in history.”
Timpe, alongside his colleagues, is now waiting for a response, “but to the best of my knowledge,” Timpe said, “none of them have heard back about the settlement.”
With millions of pirated books in LibGen’s databases, it can be hard to know which works will be covered by the settlement. JND recently released a “Works List Lookup” to address this problem. Accessible to the public, the page details which works qualify as pirated and will receive compensation. The database includes a significant number of publications by Calvin professors, including those in the religion, philosophy, computer science, and social work departments.
This lawsuit has renewed the larger debate about ethical AI use. Judge Alsup’s ruling in Bartz v. Anthropic relied on the premise that using humanly authored books to train AI is permissible, but acquiring those books through piracy is not. In other words, AI companies can train their models on modern authors’ works if those authors are aware and compensated.
Timpe saw potential for harm in this conclusion. AI models trained on humanly authored books don’t produce summaries of those books, he explained. Rather, they give people “predictions about what a summary of a text would be.” In this way, those who use AI are discouraged from reading primary texts and lose the skills gained from doing so. “I think that long term, this is likely to be detrimental to the educational project,” Timpe added.
To Timpe, Calvin’s motto of thinking deeply, acting justly, and living wholeheartedly looks like upholding responsible AI use. He encourages students to consider the long-term impacts of avoiding deep reading. As faculty wait to hear back about the settlement, this case has shown that the implications of AI are only just beginning to unfold.