Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that large language models (LLMs), including OpenAI’s GPT-3, perform “surprisingly better” on datasets released before their training data creation date than on datasets released after it.
The University of California, Santa Cruz paper by Changmao Li and Jeffrey Flanigan suggests that ChatGPT’s performance isn’t degrading so much as being exposed: new tasks are simply different from what the models were trained on. We forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data containing a vast number of examples of exactly what is asked of them, and not because they understand the tasks per se.
As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as if “it has studied advance copies of lots of tests”; however, “when you give it new tests (tasks with no examples in its training data), it performs worse.”
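To make the paper’s comparison concrete: its core check boils down to grouping benchmark results by whether each dataset was released before or after a model’s training data creation date, then comparing performance across the two groups. A minimal sketch of that idea, using hypothetical dataset names, scores, and cutoff year rather than the paper’s actual figures:

```python
from statistics import mean

# Hypothetical evaluation results: (dataset, release_year, accuracy).
# Names and numbers are illustrative only, not taken from the paper.
results = [
    ("benchmark_a", 2019, 0.82),
    ("benchmark_b", 2020, 0.79),
    ("benchmark_c", 2022, 0.61),
    ("benchmark_d", 2023, 0.58),
]

TRAINING_DATA_CUTOFF = 2021  # assumed training data creation date

before = [acc for _, year, acc in results if year < TRAINING_DATA_CUTOFF]
after = [acc for _, year, acc in results if year >= TRAINING_DATA_CUTOFF]

# Task contamination shows up as a gap favoring pre-cutoff datasets:
# the model may have seen those tasks, or near-copies, during training.
print(f"pre-cutoff mean accuracy:  {mean(before):.2f}")
print(f"post-cutoff mean accuracy: {mean(after):.2f}")
```

On this toy data, the pre-cutoff average (0.81) clearly beats the post-cutoff average (0.60), which is the pattern the paper reports across models.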
This thread went viral, but I think it buries the lede. The lesson from this paper is that ChatGPT is doing something like cheating; its performance masks its lack of understanding. Often, it can perform tasks with few or no examples of the kind of thing the user is looking for… https://t.co/YS5wDxg7iL
— Anna Mills, annamillsoer.bsky.social, she/her (@EnglishOER) January 1, 2024
As tech entrepreneur Chomba Bupe points out, the paper’s findings suggest that LLMs lean on a retrieval-based approach that mimics intelligence rather than demonstrating genuine understanding.
In short: ChatGPT is a snapshot of the internet as it was in the past – as the internet changes ChatGPT becomes outdated in both knowledge & performance on useful tasks.

OpenAI & anyone running LLMs must contend with that fact, they have to keep retraining new models.

— Chomba Bupe (@ChombaBupe) January 1, 2024
OpenAI may be having trouble keeping up. Claims that the model is getting “lazy” (which many have equated with “degrading”) have been plaguing OpenAI’s paid model, GPT-4, in recent weeks.
There has been discussion if GPT-4 has become "lazy" recently. My anecdotal testing suggests it may be true.

I repeated a sequence of old analyses I did with Code Interpreter. GPT-4 still knows what to do, but keeps telling me to do the work. One step is now many & some are odd. pic.twitter.com/OhGAMtd3Zq

— Ethan Mollick (@emollick) November 28, 2023
The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”
we’re always striving to make our models more capable and useful for everybody across millions of use cases. so please keep the feedback coming! it helps us stay on top of this dynamic evaluation problem 🙏
— ChatGPT (@ChatGPTapp) December 9, 2023
Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.