What you need to know about GPT-5
The reviews are in!
Issue 75

WHOA! I really messed up the font in that last version, huh? Here’s an actual readable version.
On today’s quest:
— The much-awaited GPT-5 is now live
— China is an AI powerhouse
— Could the Anthropic copyright case ‘ruin’ the AI industry?
— Learn about AI
The much-awaited GPT-5 is now live
OpenAI’s GPT-5 went live on Thursday. After a rocky rollout, it’s now available to all users, and changes continued to roll out over the weekend to address user complaints. Here’s the run-down:
Model Switching
One of the biggest changes is that you no longer choose from a long list of models. Under the hood, GPT-5 actually comes in four reasoning strengths (high, medium, low, and minimal), but only some users get to choose:
Free users are automatically shunted to whichever version the system decides is best for their question, but these users now have access to a reasoning model for the first time (when GPT-5 decides they need it).
$20 per month Plus users are also automatically directed to the “correct” model, but they have the option of choosing the “thinking” reasoning model (which presumably then decides whether they get high, medium, or low thinking). Users on the Plus plan are limited to 3,000 prompts per week in GPT-5 thinking mode (an increase from 200, announced on Sunday).
$200 per month Pro users can choose any of the four GPT-5 versions and have no limits.
Testers say you can push GPT-5 to use the thinking model in the free plan and higher levels of reasoning in the Plus plan by adding a prompt phrase such as “think hard about this task.”
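If you work with the API rather than the ChatGPT app, the reasoning levels can be requested directly. Here’s a minimal sketch using OpenAI’s Python client; the `reasoning_effort` parameter and its exact values are my reading of how the four strengths map to the API, so check the current docs before relying on it.

```python
# A minimal sketch, assuming OpenAI's Python client and that GPT-5 exposes
# its reasoning levels through a `reasoning_effort` parameter — verify both
# against the current API documentation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",
    reasoning_effort="high",  # assumed values: "minimal", "low", "medium", "high"
    messages=[
        # The same nudge testers use in the app can go straight into the prompt.
        {"role": "user", "content": "Think hard about this task: summarize the "
                                    "tradeoffs between two outlining approaches "
                                    "for a long nonfiction book."}
    ],
)

print(response.choices[0].message.content)
```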
As a side note, all the older models (such as o3, 4o, and 4.5) disappeared at first. After loud outcry about the loss of GPT-4o, which had been the model driving the free plan, OpenAI relented and brought it back for paid users, though you have to enable “legacy models” in the settings to see it.
But how good is it?
The first thing you’ll probably notice is that GPT-5 is fast. Whether it’s good seems to depend on what kind of user you are:
Free users who use ChatGPT for work-like tasks are likely to think GPT-5 is much better than before because they now have access to a reasoning model.
Beginning vibe coders are likely to be happy because GPT-5’s ability to create working apps from a plain-language prompt has dramatically improved. Mashable has a good review.
Users who relied on the personality of old models for things like creative writing or companionship are likely to be unhappy because the personality is different and the sycophancy has been dialed down. The ChatGPT subreddit is still filled with users asking the company to bring back 4o for free users because they preferred its personality.
Sophisticated consumer users are likely to see some good and some bad. They will likely notice the increased accuracy and reasoning power, but will likely miss the ability to switch to different models for different tasks. If they had taken the time to develop workflows that were specific to certain models or if they had built custom GPTs, those may not work the same way anymore.
Developers still have access to all the older models through the API, so they are likely happy to have another new option that performs better in some ways (e.g., less hallucination).
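For developers who want to confirm which older models their API key can still reach, a quick listing call does the job. Another minimal sketch with OpenAI’s Python client; the model IDs you actually see depend on your account.

```python
# A minimal sketch: list the model IDs available to this API key and pin a
# request to an older model instead of letting the router decide.
from openai import OpenAI

client = OpenAI()

# Print every model ID the API still exposes to this key.
for model in client.models.list():
    print(model.id)

# Pin a request to a legacy model by name.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Suggest three headline options for a "
                                          "newsletter issue about GPT-5."}],
)
print(response.choices[0].message.content)
```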
It’s also worth noting that while GPT-5 with thinking performs especially well in benchmark testing, GPT-5 without thinking is only so-so.
After complaints about not being able to see which model your chat is using, Sam Altman announced the company will soon be rolling out a new user interface with model transparency.
Accuracy & Safety
I mentioned it above, but it’s worth talking a bit more about hallucinations. OpenAI says it put significant effort into reducing the hallucination rate and called that out again when describing how it focused on improving GPT-5 for one of the biggest use cases: health queries.
The image below is from the OpenAI GPT-5 launch video, and Mashable has a good breakdown on hallucination rates. You can think of the general hallucination rates as being 11.6% for GPT-5 without thinking and 4.8% for GPT-5 with thinking.

Note that the hallucination rates above are for prompts with web searching enabled. Without web searching, hallucination rates can still approach 50%. ChatGPT will use web search when it thinks your question needs it, such as when you ask for current information like the weather or sports scores, but you can also enable it at other times by clicking “web search” under the + menu.

Deception is actually broken out separately from hallucination — an example of deception is when the model says it has completed a task it hasn’t actually done — and the company says it has now tuned the model to admit when it can’t do something or can’t find an answer. (However, I have to note that it was still deceptive at one point during my tests.)
The company has also made other changes to address safety:
They’ve reduced the likelihood that GPT-5 will answer dangerous questions, like how to use explosives.
They’re now sending messages urging users to take breaks if they’ve been chatting for a long time (something that has been associated with AI delusions).
They’ve reduced the sycophancy even more. I see the difference and like it. GPT-5 no longer annoyingly praises every question I ask.
You do have some choices when it comes to personality though. The different personalities are available in the Customize ChatGPT settings, and I’m curious whether some of them will end up allowing more sycophancy:

I tested the Listener and Robot personalities with a prompt about being overwhelmed by too much work, and the difference was dramatic — much more so than I expected.
Listener was touchy-feely. I got four empathetic paragraphs with lines like “Sounds like your brain is trying to hold excitement and exhaustion in the same fist. That’s a tricky grip to keep.”
Robot was efficient and blunt as could be. I got two lines telling me to prioritize. 🤣
Reviews & Demos
Power users who had early access have published their reviews. Ethan Mollick and Simon Willison both thought it was great, although not earth-shatteringly better.
Christopher Penn ran his standard prompt tests, and GPT-5 failed about half of them.
You can watch the OpenAI launch videos here:
I’ll share my own tests in an email later this week.
Ad: Draftsmith
People make the best editing decisions.
Draftsmith just helps you make them faster.
Save time on challenging edits
Stay focused longer and maintain your high standards even under pressure
You’re in control. Get sentence-level suggestions — and choose whether to accept, modify, or ignore, tracking every change
“Draftsmith represents the most productive step toward effective AI-assisted editing yet.”
“Draftsmith respects the way professional editors work.”
Exclusive offer for AI Sidequest readers:
Get 30% off with code SIDEQUEST30.
Apply at checkout.
China is an energy and AI powerhouse
Many of the AI critics I see don’t seem to consider the global nature of AI. For example, they think the copyright cases in the U.S. can make AI go away. But China isn’t bound by our copyright laws, and world-class Chinese models seem to be coming out every day — Chinese companies have released 1,509 new AI models this year alone, more than anyone else in the world — and until the release of GPT-oss last week, all the top open-weight models were from China.
In a recent Odd Lots podcast, supply chain expert Cameron Johnson highlighted some of the other reasons we shouldn’t ignore Chinese AI development:
Electricity. AI data centers need huge amounts of energy, and China is focused on expanding its energy infrastructure. It’s far ahead of the U.S.
New Power Brought Online in 2024
| | Solar & Wind Power | Other Power |
| --- | --- | --- |
| China | 356 GW | 95 GW |
| U.S. | 48 GW | 3 GW |
SOURCE: Financial Times, The Guardian (Note: The 95 GW of “other power” in China consists of coal-fired plants on which construction began.)
Further, China has about 50 graduate programs for battery chemistry and metallurgy, but the U.S. only has a few individual professors who work in the area — giving China a big advantage in the future of battery technology.
Education & Enthusiasm. In 2024, 72% of Chinese adults said they trusted AI compared to just 32% of U.S. adults, and China is not shying away from educating its students on AI technology.
Johnson said, “When you look at the talent, the government support, it's not just, ‘Hey guys, we want to use AI.’ It's every kid in the country now, from kindergarten all the way on up. They are mandated, starting this fall, to have AI education, the entire apparatus in the whole country.”
He added that there are significant government initiatives to get students into internships and programs studying AI, quantum computing, robotics, semiconductors, and batteries.
Business Integration. Johnson also said that during the manufacturing slowdown caused by recent tariffs, all the businesses he works with have been using their downtime to integrate DeepSeek up and down their supply chains.
Practical Focus. Johnson reflected on the different philosophies of U.S. and Chinese AI companies. He conceded that the U.S. leads in breakthroughs but described the difference as “the metaphysical versus the tangible.”
Alluding to the U.S. focus on reaching artificial general intelligence, he said, “We're going to build the digital god first [but] the technology and the adaptation and the integration is all going to be Chinese.”
Listen to Cameron Johnson on the July 28, 2025, Odd Lots podcast.
Could the Anthropic copyright case ‘ruin’ the AI industry?
Notwithstanding my comments above, Anthropic is arguing that the class-action lawsuit against it for copyright infringement threatens to “financially ruin” the entire AI industry (at least in the U.S.). The company notes that it “faces hundreds of billions of dollars in potential damages,” and because it can’t afford to risk losing, it would be incentivized to settle, which would impose rules on the whole industry that were never litigated in court.
Surprisingly, according to Ars Technica, many groups representing authors have sided with Anthropic, including Authors Alliance, the Electronic Frontier Foundation, the American Library Association, the Association of Research Libraries, and Public Knowledge. They essentially argue that the case is too broad, that the class includes many different kinds of authors with differing desires or competing interests, and that no good system has been set up to notify authors if they are included in the class.
Learn about AI
Everyday Editing: Real-World AI Strategies for Work, with Erin Servais, Marcella Weiner, Kristen Tate, and Crystal Wood. August 26, 10 a.m. Pacific. Free.
Quick Hits
Psychology
An online trove of archived conversations shows the model sending users down a rabbit hole of theories about physics, aliens and the apocalypse — Wall Street Journal
Chatbot Conversations Never End. That’s a Problem for Autistic People. — Wall Street Journal
Climate
How more efficient data centres could unlock the AI boom — Financial Times
Tucson City Council rejects Project Blue data center amid intense community pressure — Arizona Luminaria
As electric bills rise, evidence mounts that data centers share blame. States feel pressure to act — Associated Press
Bad stuff
New study sheds light on ChatGPT’s alarming interactions with teens — Associated Press
Jobs
AI is coming for (some) finance jobs — Financial Times
As companies like Amazon and Microsoft lay off workers and embrace A.I. coding tools, computer science graduates say they’re struggling to land tech jobs. — New York Times
Science & Medicine
With AI, researchers predict the location of virtually any protein within a human cell — MIT News
Other
Google says it's working on a fix for Gemini's self-loathing 'I am a failure' comments — Business Insider
How AI Conquered the US Economy: A Visual FAQ — Derek Thompson
ChatGPT will apologize for anything — AI Weirdness
Paywalled content is no match for the new generation of AI browsers, which instantly give users all the information in a paywalled story for free. — Machine Society
What is AI Sidequest?
Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!
I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.
If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn — Facebook — Mastodon]
Written by a human