Are we starting to *talk* like ChatGPT?
Plus, a surprisingly easy new way to fact-check with AI
Issue 62
On today’s quest:
— Tip: fact-checking with ChatGPT
— We’re starting to talk like ChatGPT
— ChatGPT is not hurting your memory
— Should you use DeepSeek to be energy efficient?
— Training: Using AI for writing business efficiency
Tip: Fact-checking with ChatGPT
Mike Caulfield has developed a public custom GPT that seems to do high-quality fact-checking. I haven’t used it extensively, but I was impressed with the results from running one language article through it and from these two prompts:
Is it true that COVID has caused an increase in the number of people being diagnosed with type 2 diabetes?
Do Grok's data centers pollute more than other data centers?
We’re starting to talk like ChatGPT
You may remember that certain words show up more in research papers written with the help of LLMs like ChatGPT: “delve,” “underscore,” and so on.
Well now, researchers who studied the transcripts of almost 280,000 YouTube videos found that academics are starting to talk like ChatGPT too.
The researchers conclude that humans are internalizing and reproducing linguistic patterns introduced by AI models.
Interestingly, not all the top ChatGPT-associated words are increasing in spoken language. “Prowess” has increased the most — about 40% — while the use of “ChatGPT words” such as “unearth” and “endeavor” actually fell.
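For the curious, here’s roughly how this kind of word-frequency analysis works. This is my own toy sketch, not the researchers’ actual pipeline, and the file names are hypothetical placeholders:

```python
from collections import Counter
import re

# Words that show up unusually often in LLM-assisted text.
TRACKED_WORDS = {"delve", "underscore", "prowess", "unearth", "endeavor"}

def rate_per_million(text: str) -> dict[str, float]:
    """Occurrences of each tracked word per million words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid dividing by zero on empty input
    counts = Counter(w for w in words if w in TRACKED_WORDS)
    return {w: counts[w] * 1_000_000 / total for w in TRACKED_WORDS}

# Hypothetical transcript dumps from before and after ChatGPT's release.
before = rate_per_million(open("transcripts_2021.txt").read())
after = rate_per_million(open("transcripts_2024.txt").read())
for w in sorted(TRACKED_WORDS):
    print(f"{w:>10}: {before[w]:7.1f} -> {after[w]:7.1f} per million words")
```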
I was so fascinated by this study that I dropped everything and wrote a segment about it for Tuesday’s Grammar Girl podcast, so if you want to hear more, subscribe wherever you get your podcasts (Apple Podcasts, Spotify, other).
That ‘cognitive debt’ article you’ve been seeing? It’s mostly about student essays

I have been seeing the Time article about ChatGPT affecting people’s brains at least 30 times a day since it came out. Anti-AI people are waving it around as proof that nobody should use AI, and I’m thoroughly annoyed by all the people who either haven’t read the actual study or are willfully misrepresenting the results.
The study seems relevant to education, but it does not prove LLMs are harming anyone.
Here’s the deal: In a small study of just 54 people (with only 18 remaining by the end), the researchers found that when people used ChatGPT to write SAT essays, they exhibited less brain activity on an EEG than when they wrote the essays themselves, with or without access to a search engine. Which seems like a blindingly obvious outcome to me. Writing takes more cognitive work than cutting and pasting.
Students shouldn’t use LLMs to write their essays
Again, obviously, students using ChatGPT to write their essays is bad because the reason they’re doing the writing is to learn to write and think critically — to stretch and exercise their brains. But the situation is different for people in the working world.
But working people delegate all the time
I love to write, but I have other responsibilities too, and sometimes I hire guest writers to write segments for the podcast. I do this precisely because it taxes my brain less than if I were to write the pieces myself, and it frees me up to do things like read a whole book before I interview someone.
I’ve had writers of different skill levels over the years, and editing the less skilled writers takes more of my mental energy than editing the better writers. I prefer to work with the better writers who make my job as easy as possible so I’m spending less of my limited cognitive energy on the task. That’s the whole point of delegating.
I don’t use ChatGPT this way, but if I did, it would likely be cognitively equivalent to hiring a writer.
ChatGPT is not damaging anyone’s memory
Another outcome from the study was that when the subjects were asked to quote a sentence from their essay later, people who wrote their own essays could do so far more often than people who used ChatGPT. Again, I wouldn’t expect anything different. It’s not damaging anyone’s memory.
Also, in the second session, when participants were expecting to have to quote from their essays, the ChatGPT group performed better (still not quite as well as the other groups), but most of them could remember a quote when they had a reason to.
And two final points:
The study’s author reported putting AI “traps” into the paper so that LLMs would not summarize the paper properly, which seems odd to me, and suggests an anti-AI bias. I would think a scientist would want to ensure that their paper was properly interpreted, not sabotage it.
I’m not qualified to evaluate this, but I’ve seen people question whether the EEG readings were measuring meaningful outcomes. For example, one post said your brain will light up more while doing a Google search than while reading a book, but that doesn’t mean we learn better from Google searches than from reading books.
Some other good posts about the study:
Is this your brain on ChatGPT? — Sean Goedecke
The big question isn’t how to put AI in schools but how to redesign instruction to support the reality of AI — Education Disrupted
Ad: I Hate It Here
I chose this one because the name and ad are so funny. :)
HR is lonely. It doesn’t have to be.
The best HR advice comes from people who’ve been in the trenches.
That’s what this newsletter delivers.
I Hate It Here is your insider’s guide to surviving and thriving in HR, from someone who’s been there. It’s not about theory or buzzwords — it’s about practical, real-world advice for navigating everything from tricky managers to messy policies.
Every newsletter is written by Hebba Youssef — a Chief People Officer who’s seen it all and is here to share what actually works (and what doesn’t). We’re talking real talk, real strategies, and real support — all with a side of humor to keep you sane.
Because HR shouldn’t feel like a thankless job. And you shouldn’t feel alone in it.
Should you use Deepseek to be energy efficient?
A reader named David had heard that DeepSeek was an especially efficient model (as had I) and asked whether using it would be a responsible way to use AI. A new study of open-source models has supplied an answer: an emphatic NO!
DeepSeek was the worst:
| Model | Wh per prompt | Notes |
| --- | --- | --- |
| DeepSeek-R1 70B | 2,820 | |
| ChatGPT (typical) | 3 | This is the number you see most often and the one I used in previous newsletters. |
| Cogito 8B | 2.5 | |
| ChatGPT (Sam’s number) | 0.34 | This is the number Sam Altman recently announced. |
Other studies exist.1
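To make those figures a little more concrete, here’s a quick back-of-the-envelope sketch in Python. This is my own arithmetic, not from the study, and the household baseline of roughly 29 kWh per day is an approximate US average:

```python
# Wh per prompt, taken from the table above.
WH_PER_PROMPT = {
    "DeepSeek-R1 70B": 2820,
    "ChatGPT (typical)": 3,
    "Cogito 8B": 2.5,
    "ChatGPT (Sam's number)": 0.34,
}

# Rough US average household electricity use: ~29 kWh per day.
HOUSEHOLD_WH_PER_DAY = 29_000

for model, wh in WH_PER_PROMPT.items():
    # Express one prompt as minutes of average household electricity.
    minutes = wh / HOUSEHOLD_WH_PER_DAY * 24 * 60
    print(f"{model}: {wh} Wh/prompt ~ {minutes:.2f} minutes of household use")
```

By that rough math, one DeepSeek-R1 70B prompt at the study’s figure works out to more than two hours of an average home’s electricity use, while Altman’s ChatGPT number is about a second’s worth.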
Why did we hear that DeepSeek was so energy efficient?
Energy use gets measured at two main points:
Training: what it takes to initially build the model
Inference: what it takes to process and reply to a prompt
The hype around DeepSeek and energy use was about training: it was much cheaper to train than previous models.
The developers also claimed at the time that inference would use less energy, but they provided only a small number of cherry-picked examples, and real-world use is turning out to be much different.
Choose a non-reasoning model
An important highlight of the study is that all the reasoning models used far more energy than the non-reasoning models, so if you’re trying to make energy-efficient choices, use reasoning models only when you can’t do your work without them.
Ask for concise replies
Another factor in how much energy a model uses is the length of its average response, and having used DeepSeek myself, I find the results easy to believe because it goes on and on (and on!). It once took eight paragraphs of reasoning to reply when I accidentally typed just “Thank you.”2 (However, the paper also found that DeepSeek was the most accurate of the open-source models.)
If you are trying to save energy while still using LLMs, one strategy is to ask for short or brief replies.
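If you’re hitting a model through its API rather than the chat window, you can make brevity the default. Here’s a minimal sketch assuming the official OpenAI Python SDK; the model name is just a placeholder for whatever non-reasoning model you use:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: pick a non-reasoning model
    messages=[
        # A standing instruction to stay brief cuts output tokens,
        # and response length is one driver of per-prompt energy use.
        {"role": "system", "content": "Answer in three sentences or fewer."},
        {"role": "user", "content": "Why is inference cheaper than training?"},
    ],
    max_tokens=150,  # hard backstop on response length
)
print(response.choices[0].message.content)
```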
The energy powering the data center matters too
A prompt processed at a data center that uses clean energy will be less bad for the environment than one processed at a center that uses dirty fuel. Grok, in particular, has been tied to high levels of pollution near at least one of its data centers and is being sued by the NAACP over pollution emanating from its South Memphis center.
And what about Sam Altman’s shockingly good new number?
Sam Altman’s recent proclamation that ChatGPT uses only 0.34 Wh per prompt was met with a lot of well-deserved skepticism because Altman provided no details and has a lot to gain from minimizing ChatGPT’s energy use.
A post from Towards Data Science looks at what else is known about ChatGPT and OpenAI that might support that number. But since Altman didn’t even say which model the number applies to, or how long the average prompts and responses behind it are, it’s not as meaningful as the numbers from the new open-source study.
Training: Using AI for Writing Business Efficiency
Jane Friedman is teaching an AI webinar on June 26 from 1:00 to 2:00 Eastern time. Cost: $25.
According to Jane, she will demystify “the current AI landscape and show through examples and live demos how to incorporate AI tools into your workflow so you reclaim creative time and reduce time sinks.”
You can listen to Jane and me talk about AI in our conversation back in May. The video will start at that part of the interview:
Quick Hits
Education
The Death of the Student Essay—and the Future of Cognition — The Garden of Forking Paths
The big question isn’t how to put AI in schools but how to redesign instruction to support the reality of AI — Education Disrupted
AI in the workplace
Northwestern Medicine radiology increases productivity 40% without any loss of accuracy with a custom-built AI system — Northwestern Medicine
Researchers use AI to evaluate routine chest CT scans for signs of heart disease. (Word Watch: This is sometimes called “opportunistic screening.”) — Earth.com
I’m scared
Most models tested would kill a human blocking their aims (under very specific circumstances) — Simon Willison’s Weblog
10 ways AI is being used in TV production (most are horrifying) — Omri Marcus on LinkedIn
Tips & how-tos
Tips for prompting reasoning and deep research models — AI for Education
What is AI Sidequest?
Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.
So here we are! Sidequesting together.
If you like the newsletter, please share it with a friend.
1. An earlier study found that Llama-3.1 70B used only 0.14 Wh per prompt, which doesn’t match the newer study. Maybe I’m doing the math wrong, or maybe they’re measuring it in dramatically different ways. I find the whole thing frustrating, and here’s another good post about why it’s complicated. But the bottom line is that I definitely believe DeepSeek is a bad choice if you’re looking only at energy use. (As you may remember, I’m running it off my solar roof, so I don’t worry much about the energy consumption, but I still plan to try other models just because the long responses are annoying.)
2. My understanding is that reasoning models don’t do well in general when you give them questions that are too easy or straightforward; they are truly for reasoning, not simple Q&A.
Written by a human