Stanford Student Finds That Academics Are Abusing ChatGPT

Andrew Gao, a software developer and student at Stanford University, has spotted something peculiar about recently published studies online: many of them forgot to acknowledge a co-author, OpenAI's ChatGPT.

Gao, who specializes in artificial intelligence and large language models, took to X, the platform everyone still prefers to call Twitter, to share what he found after searching Google Scholar for the phrase "As an AI language model."

The phrase is the blanket disclaimer ChatGPT uses to begin its responses whenever it's asked to take on the role of a human, such as when it's prompted to offer an opinion on something subjective, discuss a theory, or generate restricted content.

While the phrase has proliferated across a good portion of the web, the results Gao pulled up from Google Scholar, the search engine for scholarly and academic content, were astounding: paper upon paper, peer-reviewed journals, book analyses, and more.

One would think academics, of all people, would be careful enough to check the output rather than copy-paste the entire ChatGPT response, including the now-ubiquitous disclaimer phrase.

According to Gao, and many would agree, using ChatGPT isn't exactly wrong, as it can be useful, but it needs to be properly acknowledged. The findings also highlight that AI-generated content is not yet completely reliable, as large language models have a tendency to make things up, or "hallucinate."

In June, a New York judge fined lawyers for using ChatGPT to create a legal brief. Attorneys Peter LoDuca and Steven Schwartz, and their law firm Levidow, Levidow & Oberman, were slapped with a $5,000 sanction. The AI-generated document, which was supposed to be for an otherwise unremarkable tort case, contained made-up cases and citations.

Gao's findings, while funny at first, also bring to light that people will take what ChatGPT gives them as is, without even giving it a once-over to make sure glaring phrases like "as an AI language model" don't make it into their final work.


Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
