New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that large language models (LLMs), including OpenAI’s GPT-3 series and several recent open-source models, perform “surprisingly better” on datasets released before their training data was collected than on datasets released afterward.

The University of California, Santa Cruz paper, by Changmao Li and Jeffrey Flanigan, suggests that ChatGPT’s performance isn’t actually degrading because new tasks differ from what the models were trained on. Rather, we forget that these models, especially the groundbreaking GPT-3, performed astoundingly well in the first place because they were trained on massive amounts of data containing vast numbers of examples of the very tasks asked of them, and not particularly because they understand those tasks per se.

As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as though the model “has studied advance copies of lots of tests”; “when you give it new tests (tasks with no examples in its training data), it performs worse.”
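To make the comparison concrete, here is a minimal, hypothetical sketch of the before/after-cutoff check the study describes: split benchmarks by release date relative to the model’s training-data cutoff and compare average zero-shot accuracy on each side. The benchmark names, scores, and cutoff year below are illustrative placeholders, not figures from the paper.

```python
# Illustrative sketch of a task-contamination check: compare a model's
# average zero-shot accuracy on benchmarks released before vs. after its
# training-data cutoff. All names and numbers below are made up.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Benchmark:
    name: str
    release_year: int
    zero_shot_accuracy: float  # assumed to be measured separately

TRAINING_CUTOFF = 2021  # hypothetical year the training data was collected

benchmarks = [
    Benchmark("task_a", 2019, 0.78),
    Benchmark("task_b", 2020, 0.74),
    Benchmark("task_c", 2022, 0.55),
    Benchmark("task_d", 2023, 0.52),
]

pre = [b.zero_shot_accuracy for b in benchmarks if b.release_year < TRAINING_CUTOFF]
post = [b.zero_shot_accuracy for b in benchmarks if b.release_year >= TRAINING_CUTOFF]

# A large gap favoring pre-cutoff benchmarks is consistent with the model
# having seen examples of those tasks during training.
print(f"avg accuracy, pre-cutoff:  {mean(pre):.2f}")
print(f"avg accuracy, post-cutoff: {mean(post):.2f}")
```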

As tech entrepreneur Chomba Bupe points out, the paper underscores that LLMs rely on a retrieval-like approach that mimics intelligence: they reproduce patterns from their training data rather than reason through genuinely new tasks.
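As a loose analogy for that critique, the toy sketch below “answers” prompts purely by retrieving the most similar memorized example, so it looks capable on familiar inputs and fails on novel ones. This is an analogy only, not how LLMs are actually implemented.

```python
# Toy analogy for "retrieval mimicking intelligence": answer a prompt by
# looking up the most similar memorized example. Not how LLMs work internally.
from difflib import SequenceMatcher

memorized = {  # stand-in for "examples seen in training data"
    "translate 'bonjour' to english": "hello",
    "what is 2 + 2": "4",
}

def retrieve_answer(prompt: str) -> str:
    # Pick the memorized prompt most similar to the input; return its answer.
    best = max(memorized, key=lambda p: SequenceMatcher(None, p, prompt).ratio())
    return memorized[best]

print(retrieve_answer("translate 'bonjour' to english"))  # "hello": looks smart
print(retrieve_answer("what is 17 * 23"))                 # "4": fails on a novel task
```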

OpenAI, meanwhile, may be having trouble keeping up. Claims that the model is getting “lazy” (which many have equated with “degrading”) have plagued OpenAI’s paid model, GPT-4, in recent weeks.

The company, through its X account, explained that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”

Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
