New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that recent large language models (LLMs), including open-source models and OpenAI’s GPT-3 series, perform “surprisingly better” on datasets released before their training data was collected than on datasets released afterward.

The University of California, Santa Cruz paper by Changmao Li and Jeffrey Flanigan suggests that ChatGPT’s performance isn’t so much degrading as it is being measured against new tasks that differ from what the models were trained on. We forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data that already contained a vast number of examples of exactly what is asked of them, and not particularly because they understand the tasks per se.
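
To picture what that comparison looks like in practice, here is a minimal, hypothetical Python sketch of the before-versus-after-cutoff check described above. The benchmark names, release dates, cutoff date, and accuracy figures are placeholders for illustration, not numbers from the paper.

```python
from datetime import date

# Hypothetical illustration of a "task contamination" check: compare a
# model's scores on benchmarks released before vs. after its assumed
# training-data cutoff. All names, dates, and scores are placeholders.

TRAINING_CUTOFF = date(2021, 9, 1)  # assumed cutoff for the model under test

benchmark_results = [
    # (benchmark name, public release date, zero-shot accuracy)
    ("benchmark_2019", date(2019, 6, 1), 0.78),
    ("benchmark_2020", date(2020, 3, 1), 0.74),
    ("benchmark_2022", date(2022, 5, 1), 0.55),
    ("benchmark_2023", date(2023, 1, 1), 0.52),
]

pre = [acc for _, released, acc in benchmark_results if released < TRAINING_CUTOFF]
post = [acc for _, released, acc in benchmark_results if released >= TRAINING_CUTOFF]

print(f"Mean accuracy, pre-cutoff benchmarks:  {sum(pre) / len(pre):.2f}")
print(f"Mean accuracy, post-cutoff benchmarks: {sum(post) / len(post):.2f}")

# A consistent gap in favour of pre-cutoff benchmarks is the pattern the
# authors attribute to task contamination rather than task understanding.
```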

As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as if “it has studied advance copies of lots of tests,” but “when you give it new tests (tasks with no examples in its training data), it performs worse.”

As tech entrepreneur Chomba Bupe points out, the paper underscores that LLMs lean on a retrieval-based approach that mimics intelligence rather than reflecting genuine understanding of the tasks.

OpenAI may be having trouble catching up. Claims that the model has gotten “lazy” (which many have equated to “degrading”) have plagued OpenAI’s paid model, GPT-4, in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
