Tuesday, February 17, 2026

New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that recent large language models (LLMs), including OpenAI’s GPT-3, perform “surprisingly better” on datasets released before their training data creation date than on datasets released after.

The University of California, Santa Cruz paper, by Changmao Li and Jeffrey Flanigan, suggests that the issue isn’t that ChatGPT’s performance is degrading because new tasks differ from its training data. Rather, we forget that these models, especially the groundbreaking GPT-3, performed astoundingly well in the first place because they were trained on massive amounts of data containing vast numbers of examples of what is asked of them, not particularly because they understand the tasks per se.
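To make the study’s comparison concrete, below is a minimal sketch of the kind of analysis it describes, in Python. Everything here is hypothetical: the dataset names, scores, and training cutoff are illustrative stand-ins, not figures from the paper. The idea is simply to split evaluation datasets by whether they were released before or after the model’s training data was collected, then compare average performance.

```python
from datetime import date
from statistics import mean

# Hypothetical benchmark results: (dataset, release date, model accuracy).
# Illustrative numbers only; not taken from Li and Flanigan's paper.
results = [
    ("task_a", date(2019, 6, 1), 0.81),
    ("task_b", date(2020, 3, 15), 0.78),
    ("task_c", date(2022, 1, 10), 0.55),
    ("task_d", date(2022, 9, 30), 0.52),
]

# Assumed training-data creation cutoff for the model under test.
TRAINING_CUTOFF = date(2021, 9, 1)

# Partition datasets by release date relative to the cutoff.
before = [acc for _, released, acc in results if released < TRAINING_CUTOFF]
after = [acc for _, released, acc in results if released >= TRAINING_CUTOFF]

print(f"pre-cutoff mean accuracy:  {mean(before):.2f}")
print(f"post-cutoff mean accuracy: {mean(after):.2f}")
```

A markedly higher pre-cutoff average, which the study observed across several models, is the signature of task contamination: the model may have effectively seen those tasks during training.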

As writing teacher and AI in education specialist Anna Mills puts it, it’s like “it has studied advance copies of lots of tests,” but “when you give it new tests (tasks with no examples in its training data), it performs worse.”

As tech entrepreneur Chomba Bupe points out, the paper underscores that LLMs rely on a retrieval-based approach that merely mimics intelligence.

OpenAI may be having trouble catching up. Claims that the model is getting “lazy” (which many have equated to “degrading”) have plagued OpenAI’s paid model, GPT-4, in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
