New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that recent large language models (LLMs), including OpenAI’s GPT-3 series, perform “surprisingly better” on datasets released before their training data creation date than on datasets released after it.

The University of California, Santa Cruz paper, by Changmao Li and Jeffrey Flanigan, suggests that ChatGPT’s performance isn’t really degrading; rather, new tasks are simply different from what the models were trained on. We forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data containing a vast number of examples of what is asked of them, not because they understand the tasks per se.
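To make the comparison concrete, here is a minimal sketch (in Python) of the kind of before-versus-after-cutoff split the paper describes. The cutoff date, benchmark names, and accuracy numbers below are made up for illustration and do not come from the study.

```python
# Illustrative sketch only: compare a model's scores on benchmarks released
# before vs. after its training data cutoff. All values are hypothetical.
from datetime import date
from statistics import mean

TRAINING_CUTOFF = date(2021, 9, 1)  # assumed cutoff for a GPT-3-era model

# (benchmark name, release date, accuracy) -- illustrative numbers only
results = [
    ("benchmark_a", date(2019, 5, 1), 0.78),
    ("benchmark_b", date(2020, 11, 1), 0.74),
    ("benchmark_c", date(2022, 3, 1), 0.55),
    ("benchmark_d", date(2023, 1, 1), 0.52),
]

before = [acc for _, released, acc in results if released < TRAINING_CUTOFF]
after = [acc for _, released, acc in results if released >= TRAINING_CUTOFF]

# A large gap here is consistent with "task contamination": the model may
# have seen examples of the older benchmarks' tasks during training.
print(f"mean accuracy, pre-cutoff datasets:  {mean(before):.2f}")
print(f"mean accuracy, post-cutoff datasets: {mean(after):.2f}")
```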

As writing teacher and AI-in-education specialist Anna Mills puts it, it is as if “it has studied advance copies of lots of tests,” but “when you give it new tests (tasks with no examples in its training data), it performs worse.”

As tech entrepreneur Chomba Bupe points out, the paper underscores that LLMs rely on a retrieval-based approach that mimics intelligence rather than demonstrating genuine understanding of the task.

OpenAI may be having trouble catching up. Claims of the model getting “lazy” (which many have equated with “degrading”) have been plaguing OpenAI’s paid model, GPT-4, in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
