New Study Reminds Us That ChatGPT Does Not Really *Understand* What You Want It To Do

Amid chatter about ChatGPT’s reportedly degrading performance, a new study found that recent large language models (LLMs), including OpenAI’s GPT-3 series and several open-source models, perform “surprisingly better” on datasets released before their training data creation date than on datasets released after it.

The University of California, Santa Cruz paper, by Changmao Li and Jeffrey Flanigan, suggests that it isn’t so much that ChatGPT’s performance is degrading; rather, newer tasks differ from what the models were trained on, and we forget that these models, especially the groundbreaking GPT-3, performed astoundingly well because they were trained on massive amounts of data containing countless examples of what was being asked of them, not because they particularly understand the tasks per se.
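
The effect described in the study can be illustrated with a simple comparison. The sketch below is not the paper’s code; the dataset names, accuracy figures, and cutoff date are illustrative placeholders. It groups benchmarks by whether they were released before or after a model’s training-data cutoff and compares average accuracy across the two groups.

```python
# Minimal sketch of the pre- vs post-cutoff comparison described in the study.
# All dataset entries and scores below are hypothetical placeholders, not
# results from the paper.
from datetime import date
from statistics import mean

TRAINING_CUTOFF = date(2021, 9, 1)  # assumed training-data cutoff for the model

datasets = [
    # (dataset name, release date, measured accuracy for the model)
    ("benchmark_2019", date(2019, 6, 1), 0.78),
    ("benchmark_2020", date(2020, 3, 1), 0.74),
    ("benchmark_2022", date(2022, 5, 1), 0.58),
    ("benchmark_2023", date(2023, 1, 1), 0.55),
]

pre_cutoff = [acc for _, released, acc in datasets if released < TRAINING_CUTOFF]
post_cutoff = [acc for _, released, acc in datasets if released >= TRAINING_CUTOFF]

print(f"Mean accuracy, datasets released before cutoff: {mean(pre_cutoff):.2f}")
print(f"Mean accuracy, datasets released after cutoff:  {mean(post_cutoff):.2f}")

# A consistent gap favouring the pre-cutoff group is the pattern the study
# associates with task contamination: the model may have seen those tasks
# (or close copies) during training, rather than genuinely generalizing.
```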

As writing teacher and AI-in-education specialist Anna Mills puts it, it’s as if “it has studied advance copies of lots of tests,” but “when you give it new tests (tasks with no examples in its training data), it performs worse.”

In effect, as tech entrepreneur Chomba Bupe points out, the paper’s findings suggest that LLMs rely on a retrieval-based approach that mimics intelligence rather than demonstrating genuine understanding.

OpenAI may be having trouble catching up. Complaints that its paid model, GPT-4, has been getting “lazy” (which many have equated with “degrading”) have been mounting in recent weeks.

The company, through its X account, explains that training a chat model “is not a clean industrial process,” and that it’s “less like updating a website with a new feature and more an artisanal multi-person effort to plan, create, and evaluate a new chat model with new behavior!”


Information for this story was found via X, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
