Calvin University Chimes

Calvin University's official student newspaper since 1907

What’s next for AI, and what that means for education

Let there be light

A little over a year ago, on Nov. 30, 2022, Sam Altman –– a somewhat-unknown tech entrepreneur and CEO of OpenAI –– tweeted, “Today we launched ChatGPT. Try talking with it here,” followed by a link to the product, still in its infancy.

It is unknown whether Altman anticipated the AI-generated videos of Will Smith eating spaghetti, audio clips of Joe Biden promising to bring back the Lego Power Miners franchise, images of Taylor Swift dancing with Kansas City Chiefs fans, AI-powered girlfriends that steal your credentials or a wave of deepfake telemarketing scams that would follow.

With that, Altman had effectively let the AI genie out of the bottle.

Even if OpenAI pulled the plug on ChatGPT this very instant, open-source models like LLaMA 2 and BLOOM have nearly matched the performance of proprietary, closed-source models like GPT-4, and they can run locally on your own machine. With this in mind, it is safe to say that AI is here to stay.
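For the technically inclined, here is a minimal sketch of what running an open model locally can look like, using the open-source Hugging Face transformers library in Python. The model name below is just an illustrative small open model chosen because it downloads without a license gate; treat the whole snippet as an assumption-laden sketch, not a recommendation.

# Minimal local text generation with a small open model.
# Assumes: pip install transformers torch
# The model name is illustrative; LLaMA 2 itself requires accepting Meta's license first.
from transformers import pipeline

generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
result = generator("Explain what a large language model is in one sentence.",
                   max_new_tokens=60)
print(result[0]["generated_text"])

On a machine with a decent consumer GPU (or some patience on a CPU), this runs entirely on your own hardware once the model weights are downloaded.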

What’s been happening in AI-land recently?

On Thursday, February 15 –– not even a week ago –– OpenAI published a preview of Sora, a text-to-video model capable of generating minute-long videos from a prompt. In the preview, OpenAI included a few example videos generated by the model. And boy, are they compelling.

The videos are realistic enough to fool the untrained eye. If you’re not looking closely at the number of fingers on a hand, the direction of the breeze blowing the candles on a birthday cake, or the walking motion of subjects in the background, you’d mistakenly think the videos are genuine.

While the videos have obviously been hand-picked by OpenAI, the amount of progress they’ve made is staggering. If this is merely the tech in its infancy, it’s easy to imagine a future where advertisements or short films are composed entirely of AI-generated videos.

Recent breakthroughs aren’t limited to just OpenAI. Google’s Bard chatbot was recently rebranded as Gemini, and is armed with full access to the Google suite –– like Google Flights, Google Maps, and Gmail –– allowing it to do things ChatGPT can’t, like finding the cheapest flight to LAX on Spirit Airlines.

What does the future hold?

Please be warned that the following is speculative in nature, and a reflection of my own beliefs.

As we’ve seen, the biggest AI breakthroughs will happen at the largest, best-resourced enterprises. Entities with access to the most data, like Microsoft and Google, will always be first. The massive, multi-billion-dollar GPU farms required to train AI language models simply aren’t accessible to small startups. Then the innovations will trickle down to smaller entities, startups and the average joe-schmoe with an Nvidia graphics card.

AI language models will specialize. Currently, many integrated “AI assistants” –– like the chatbots used by your local Chevy dealer –– are simply API calls to a general-purpose text model. While this is easy to build, it’s inefficient. For example, if you run a gluten-free cooking website, using a general-purpose model to answer only questions about gluten-free recipes is a massive waste of resources –– and won’t guarantee the best response.
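To make that concrete, here is roughly what such a thin wrapper looks like in Python: a single call to a general-purpose chat API with a system prompt bolted on top. The function name, system prompt and model are illustrative placeholders, not any particular dealership’s or website’s actual assistant.

# A "custom assistant" that is really just one API call to a general-purpose model.
# Assumes: pip install openai, plus an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def gluten_free_assistant(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # any general-purpose chat model would do
        messages=[
            {"role": "system",
             "content": "Only answer questions about gluten-free cooking."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(gluten_free_assistant("Can I swap almond flour into a pie crust?"))

Every question, no matter how narrow, gets answered by a model trained on essentially the whole internet.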

Instead, it’s much more efficient to train your model exclusively on conversations about gluten-free cooking than on the entirety of accessible data on the internet, as is the case with general-purpose models. This gives your model the advantage of depth over breadth, allowing it to become very good at one specific thing rather than passably good at everything.

So, we’ll see more application-specific models in the future rather than general-purpose ones wearing a hat. 
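As a rough sketch of what building one of those specialized models might involve, here is a hedged fine-tuning example using the Hugging Face transformers and datasets libraries. The base model, the gluten_free_chats.txt file and the hyperparameters are all hypothetical placeholders; the point is only that the training data is restricted to one domain.

# Fine-tune a small open model on domain-specific text (illustrative setup).
# Assumes: pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "distilgpt2"  # small base model, chosen only for illustration

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # distilgpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical corpus: one gluten-free cooking exchange per line of a text file.
dataset = load_dataset("text", data_files={"train": "gluten_free_chats.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gf-assistant",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The resulting model knows far less about the world in general, but that is exactly the trade described above: depth over breadth.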

ChatGPT is good at some things and bad at others. It has a specific style of writing that’s easy to spot once you’ve seen enough of it. Once AI-generated text and images have proliferated enough, it’ll be a lot easier for the average person to notice AI-generated writing.

After the wave of AI optimism (“AI can do anything!”) has passed –– and once people have had enough opportunities to brush up against its limits –– people will recognize AI’s strengths and weaknesses.

You can’t train AI models on AI-generated data. Well, technically, you can –– but that creates issues. As more and more AI-generated text, images and other media appear on the internet, they will increasingly taint the training data available to future AI language models that learn from web-scraped data.

Additionally, data-poisoning tools like Nightshade –– developed at the University of Chicago –– help protect artists’ work from being used to train AI models. The tool works by embedding subtle changes in images that confuse and mislead AI models during the training process.

With these two things in mind, it’s not hard to imagine that big corporations are scrambling to archive as much internet data as possible before it’s filled with too much AI-generated or Nightshade-protected content. A large trove of untainted general-purpose training data might be of great value in the future.

Either that, or they’re betting that they’ll have technologies to filter out AI-generated or Nightshade-protected content in time. It’ll be interesting to see a counter-Nightshade, then a counter-counter-Nightshade, and so on. It’ll be an arms race –– like the continuous competition between CAPTCHAs and CAPTCHA-solving bots.

What’s in store for education?

AI has massive potential for learning –– and it would be foolish for academic institutions to ignore this. From adjusting curriculums in accordance with the needs of neurodivergent students, to detecting cheating, to automatically grading assignments and giving feedback, to clarifying and explaining difficult concepts, AI certainly has potential in education –– and that’s only scratching the surface.

As previous Chimes reporting points out, curriculums everywhere have mostly rejected the use of AI –– and I believe that’s for the better. Given how current curriculums are set up, students are incentivized to use tools like ChatGPT to cheat on assignments.

Instead of this, could we build better curriculums that wholeheartedly embrace AI –– viewing it as a tool for learning rather than an alternative to learning? Absolutely.

In that same Chimes article, Calvin’s Keith Vander Linden, a computer science professor, explains how his software development class encouraged the use of AI to help with coding applications. In turn, Vander Linden expects more from his students’ projects.

At the same time, we can create environments where the use of AI is intentionally barred. As Vander Linden details in the aforementioned article, reverting to physical on-paper tests is a great way to ensure ChatGPT won’t be used.

Curriculums have the power to both allow and disallow the use of AI –– and the best curriculums will use both, barring its use when necessary.

In the 1970s, pocket calculators were becoming widespread, and the implications for math curriculums at the time were obvious. In response, a band of math teachers held a protest against their use in the classroom.

The rest is history, of course. Today, calculators are an integral part of any math curriculum. It’s hard to think of math education today without them.

I believe the same will be true of generative AI. It has vast potential for accelerating learning –– and we’ve only seen the tip of the iceberg. Once curriculums have adapted to this new technology, we can make education more accessible to people with learning disabilities, people facing language barriers, those who can’t afford higher education and those from differing cultural backgrounds –– propelling education further than it’s ever gone before.
