New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that recent large language models (LLMs), including OpenAI’s GPT-3, perform “surprisingly better” on datasets released before their training data was collected than on datasets released afterward.

The University of California, Santa Cruz paper, by Changmao Li and Jeffrey Flanigan, suggests that ChatGPT’s performance isn’t so much degrading as being tested on tasks unlike those the models were trained on. We tend to forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data containing countless examples of exactly what is asked of them, and not particularly because they understand the tasks per se.
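
The comparison at the heart of the study is straightforward; below is a minimal illustrative sketch of that kind of before/after-cutoff check. It is not the authors’ code, and the dataset names, release years, cutoff year, and accuracy figures are hypothetical placeholders standing in for real benchmark runs.

```python
# Minimal illustrative sketch (not the authors' code) of comparing a model's
# accuracy on datasets released before vs. after its training-data cutoff.
from statistics import mean

TRAINING_CUTOFF = 2021  # assumed year the model's training data was collected

# Hypothetical benchmark results: (dataset name, release year, accuracy)
results = [
    ("benchmark_a", 2019, 0.82),
    ("benchmark_b", 2020, 0.79),
    ("benchmark_c", 2022, 0.61),
    ("benchmark_d", 2023, 0.58),
]

# Split results by whether the dataset appeared before or after the cutoff.
before = [acc for _, year, acc in results if year < TRAINING_CUTOFF]
after = [acc for _, year, acc in results if year >= TRAINING_CUTOFF]

print(f"mean accuracy, pre-cutoff datasets:  {mean(before):.2f}")
print(f"mean accuracy, post-cutoff datasets: {mean(after):.2f}")
# The study's pattern: the pre-cutoff average is noticeably higher, which the
# authors attribute to the model having seen those tasks during training.
```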

As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as if “it has studied advance copies of lots of tests.” However, “when you give it new tests (tasks with no examples in its training data), it performs worse.”

As tech entrepreneur Chomba Bupe points out, the paper underscores that LLMs rely on a retrieval-based approach that mimics intelligence rather than genuine understanding of the task.

OpenAI may be having trouble catching up. Complaints that the company’s paid model, GPT-4, has been getting “lazy” (which many equate with “degrading”) have mounted in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
