- AI Sidequest: How-To Tips and News
What does AI really cost?
Tech CEOs testify before Congress, and I even talk to a real-life AI tech CEO
Issue #55: Trying to Figure Out the True Cost
On today’s quest:
— The cost of saying “thank you”
— Draftsmith 2.0 launches
— Tech CEOs testify
— What if AI is just a normal technology?
— An anti-cheating tip for teachers
The cost of saying ‘thank you’
Replying to a post on X, Sam Altman said people saying “please” and “thank you” to chatbots is costing him “tens of millions.”
I understood how sending a separate prompt to say “thank you” would have a cost, but I didn’t understand how saying “please” could matter until I talked with Daniel Heuman, CEO of Intelligent Editing, for an upcoming Grammar Girl podcast about AI, editing, and its new AI product, Draftsmith.
Daniel says they pay for each token they submit to (and receive from) ChatGPT. A token is about four characters, so submitting a longer prompt does cost more money, which suggests it uses more energy. Thus, even your “please” has a cost.
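For a back-of-the-envelope sense of how this adds up, here’s a minimal Python sketch using the four-characters-per-token rule of thumb from above. The price per token is a made-up illustrative number, not any provider’s real rate, and real tokenizers count tokens more precisely than this.

```python
# Rough sketch of per-token cost accounting, using the newsletter's
# "a token is about four characters" rule of thumb.
# PRICE_PER_MILLION_TOKENS is purely illustrative, not a real API rate.

PRICE_PER_MILLION_TOKENS = 0.60  # hypothetical dollars per 1M input tokens


def estimate_tokens(text: str) -> int:
    """Approximate token count: roughly 4 characters per token."""
    return max(1, round(len(text) / 4))


def estimate_cost(text: str) -> float:
    """Dollar cost of submitting `text`, at the illustrative rate."""
    return estimate_tokens(text) * PRICE_PER_MILLION_TOKENS / 1_000_000


prompt = "Summarize this paragraph."
polite = "Please summarize this paragraph. Thank you!"

# Politeness adds only a handful of tokens per prompt...
extra_tokens = estimate_tokens(polite) - estimate_tokens(prompt)
print(f"Politeness adds roughly {extra_tokens} tokens per prompt")

# ...but multiplied across hundreds of millions of prompts a day,
# those extra tokens (and the energy behind them) add up.
```

The per-prompt cost is a rounding error; the “tens of millions” figure only makes sense at the scale of a service handling enormous prompt volume.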
I’ve definitely felt the pull to be polite with chatbots before, and I’ve pondered why. I know the bots don’t have feelings, but it somehow violates my sense of decency to be impolite. I don’t view chatbots as sentient or as friends, but I suppose I’m no more immune to their human-like communication patterns than anyone else.
A Futurism story about Sam’s comment had an amusing tidbit: in a U.S. survey about why people are polite to chatbots, 12% said it was to appease the algorithm in case of an AI uprising, so I’m sure those people won’t cut back on their pleases and thank-yous. I imagine a couple of tokens don’t seem like a large price to pay for appeasing future AI overlords.
Intelligent Editing launches Draftsmith 2.0
Speaking of Draftsmith, version 2 launches today. I’ve watched the company’s progress with interest, and Daniel says the switch from GPT-3.5 in the first version to GPT-4o mini in the new version has led to better accuracy.

A screenshot of Draftsmith showing buttons that read “More Empathetic,” “More Friendly,” and “Simplify,” with the suggestions displayed below with track changes turned on. Source: the Draftsmith launch blog post.
The product works within Microsoft Word and gives you buttons that prompt the AI behind the scenes to make different kinds of changes to the text. The changes appear with tracked changes enabled, so you can choose whether to accept or reject each suggestion.
If you’d like to learn more about the product, here’s the launch blog post.
More on the climate
I was excited to talk to Daniel about Draftsmith not only because I think the product is interesting, but also because I thought he might be able to help me figure out whether the environmental cost of AI is catastrophic or no big deal.
As I’ve said in past newsletters, I just can’t seem to pin it down. I’ve read seemingly credible reports that say 50,000 ChatGPT queries emit far less CO2 than a single transatlantic flight and that eating one hamburger uses more water than 300 queries. Yet AI company executives themselves say energy is a problem: they talk about needing fusion power for their businesses to succeed and about reactivating nuclear plants.
When some big tech CEOs recently appeared before the House Energy and Commerce Committee, their tone was dire. I caught some of it live, and they sounded like they were BEGGING for more energy infrastructure. Eric Schmidt said it directly: “we need the energy and the numbers are profound.” And Will Oremus of the Washington Post, writing about the hearing, described the CEOs as saying the AI industry needs “almost unfathomable amounts of energy.”
What if AI is just a ‘normal’ technology?
A new report from the Knight First Amendment Institute offers a good counterargument to the dystopian AI 2027 report that I linked to in the last newsletter.* Knight suggests that AI is just a normal technology that will be adopted at a normal pace.
One line jumped out at me (emphasis added): “Studies generally do show that professionals in many occupations benefit from existing AI systems, but this benefit is typically modest and is more about augmentation than substitution ... a small number of occupations such as copywriters and translators have seen substantial job losses.”
An anti-cheating tip for teachers
Never forget that AI checkers don’t work and that they unfairly flag students who are minorities or who speak English as a second language. That said, I did see what seems like a straightforward little trick for catching students who paste assignments into chatbots.
Quick Hits
Move over, search engine optimization (SEO): now we have to think about generative engine optimization (GEO) — Search Engine Land
The National Academy of Sciences publishes a discussion on the use of AI in the sciences. Although the panelists had many concerns, one said, “I’m not sure I know of any scientist right now who is not engaging with AI.” — JAMA Network
An AI-generated news story quotes a juror on an active trial, causing a stir. But it turns out the juror wasn’t a real person; the name is that of a fictional Swedish police inspector. — Joshua J. Friedman on Bluesky
What is AI Sidequest?
Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.
So here we are! Sidequesting together.
If you like the newsletter, please share it with a friend.
Written by a human.
* This report was very long — about as long as the AI 2027 report — and I confess that I didn’t make it to the end, which highlights an interesting problem: Reports about how everything is going to be OK are almost by definition more boring than reports about the end of humanity. I don’t have the expertise to say which report is more credible, but it’s something to keep in mind as you hear dramatic predictions, whether they are good or bad.