New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study has found that large language models (LLMs), including OpenAI’s GPT-3 series and several recent open-source models, perform “surprisingly better” on datasets released before their training data creation date than on datasets released after.

The University of California, Santa Cruz paper by Changmao Li and Jeffrey Flanigan suggests that ChatGPT’s performance isn’t actually degrading; rather, newer tasks simply differ from what the models were trained on. We tend to forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data containing a vast number of examples of the very tasks asked of them, and not particularly because they understand the tasks per se.

As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as if “it has studied advance copies of lots of tests,” but “when you give it new tests (tasks with no examples in its training data), it performs worse.”

As tech entrepreneur Chomba Bupe points out, the paper’s findings underline that LLMs lean on a retrieval-like approach that mimics intelligence rather than demonstrating genuine task understanding.
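To make the study’s comparison concrete, here is a minimal sketch (not code from the paper itself) of the kind of check it describes: split benchmark results by whether each dataset was released before or after the model’s training data creation date, then compare average scores. All dataset names, dates, and accuracy figures below are hypothetical.

```python
# Sketch of the pre-/post-cutoff comparison described in the study.
# Every dataset name, release date, and accuracy here is hypothetical.
from datetime import date
from statistics import mean

# (dataset name, release date, model accuracy on that dataset)
results = [
    ("benchmark_a", date(2020, 6, 1), 0.81),
    ("benchmark_b", date(2021, 3, 15), 0.78),
    ("benchmark_c", date(2022, 9, 1), 0.61),
    ("benchmark_d", date(2023, 2, 20), 0.58),
]

TRAINING_CUTOFF = date(2021, 9, 30)  # assumed training-data creation date

# Partition results by release date relative to the cutoff.
before = [acc for _, released, acc in results if released <= TRAINING_CUTOFF]
after = [acc for _, released, acc in results if released > TRAINING_CUTOFF]

print(f"Mean accuracy, pre-cutoff datasets:  {mean(before):.2f}")
print(f"Mean accuracy, post-cutoff datasets: {mean(after):.2f}")
# A large gap favoring pre-cutoff datasets is consistent with task
# contamination rather than genuine task understanding.
```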

OpenAI, meanwhile, may be having trouble keeping up. Complaints that its paid model, GPT-4, is getting “lazy” (which many have equated with “degrading”) have been mounting in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
