What do people who can detect AI actually see?
Plus, how do automated AI detectors work?
Issue 64
On today’s quest:
— How do AI detectors work?
— Can people actually detect AI writing?
— Calling all Emilys and Sarahs!
— Chatbots are crushing website traffic
— Word watch: fauxtomation
— Word watch: robotheism
— More on that ‘brain rot’ study
— More on the Anthropic copyright lawsuit
How do AI detectors work?
Last week for the Grammar Girl podcast, I wrote about whether em dashes are a sign of AI writing, as well as about other ways of “detecting” AI writing. (I put that in quotation marks because the commercial AI detectors aren’t very good.) And here’s some interesting information that seemed too technical for the podcast but fits better here:
You may have heard that AI detectors falsely accuse non-native English speakers far more often than native English speakers. A Stanford study found a false positive rate of 61% in this group, for example.
This happens because detectors make certain assumptions about what signals AI writing:
The structure of AI writing is more predictable than human writing, which is more variable and erratic.
The vocabulary of AI writing is narrower than that of human writing.
You can easily see how someone who didn’t grow up speaking English might write sentences with less variable structures and might have fewer words they’re comfortable using.
Detectors sometimes also look for passages whose style diverges from the rest of the text, flagging those sections as possibly AI generated; this approach is less likely to cause discrimination problems.
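If you're curious what those signals look like in practice, here's a toy Python sketch of the kind of surface statistics a detector might compute. This is purely illustrative and based on my own assumptions, not how any commercial detector actually works; real tools typically use a language model to score how predictable each word is (perplexity) rather than these crude proxies.

```python
import re
import statistics

def ai_likeness_signals(text: str) -> dict:
    """Crude proxies for the two assumptions above: low vocabulary
    variety and uniform sentence structure. Illustrative only."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]

    # Vocabulary variety: unique words / total words.
    # Detectors assume a lower ratio looks more machine-like.
    type_token_ratio = len(set(words)) / len(words) if words else 0.0

    # Structural variability: spread of sentence lengths.
    # Uniform lengths (a low number) read as "predictable."
    length_spread = statistics.stdev(lengths) if len(lengths) > 1 else 0.0

    return {"type_token_ratio": type_token_ratio,
            "sentence_length_spread": length_spread}

print(ai_likeness_signals("The cat sat. The dog sat. The bird sat on the mat."))
```

The sketch also shows why non-native speakers get flagged: a writer with a smaller comfortable vocabulary and more uniform sentence patterns scores "more AI" on exactly these measures.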
If you’d like to listen to the em-dash segment, you can find it here:
Can people actually detect AI writing?
Most automated AI detectors don’t work, and most humans do no better than chance at detecting AI writing, but in a recent study, a small number of people who use LLMs daily for writing tasks were able to accurately detect AI writing from GPT-4o. Researchers at the University of Maryland found that the majority opinion of a panel of five heavy AI users was nearly perfect at identifying AI writing, misclassifying only 1 in 300 articles.
Signs they looked for included:
Specific words known to be used by AI. (Nonexperts tended to mistakenly think any “fancy” word was a sign of AI writing.)
Consistently grammatically correct sentences. (Nonexperts tended to mistakenly think humans would produce the more grammatically correct sentences.)
Optimistically vague conclusions. (Nonexperts tended to mistakenly think that any neutral writing was AI generated.)
Individual experts used a variety of other cues (see below).
Nonexperts overestimated their ability to detect AI writing, and the experts achieved their near-perfect results by working as a group. No single expert’s determination carried the whole weight; a classification required 3 out of 5 people on the expert panel to agree.
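A quick back-of-the-envelope calculation shows why the panel format matters. If each expert independently gets a call right with some probability, the chance that at least 3 of 5 land on the right answer is a simple binomial tail, and it's much higher than any individual's accuracy. The 90% figure below is an assumed number for illustration, not one from the paper:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int = 5, k: int = 3) -> float:
    """Probability that at least k of n independent judges are correct,
    given each judge is individually correct with probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Assumed 90% individual accuracy -- illustrative only, not from the study.
print(round(majority_vote_accuracy(0.90), 4))  # 0.9914: better than any one judge
```

Real panelists' judgments aren't fully independent, since they rely on overlapping cues, so this overstates the effect, but it captures why 3-of-5 agreement can approach that 1-in-300 error rate.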
Calling all Emilys and Sarahs!
This was the most mind-blowing detail to me from the University of Maryland study:
The assessment panels worked with paired articles on the same topics: the researchers started with human-written articles and then had AI generate articles on the same topic and of the same length — so the AI had to make up names for people in the stories, and it used the same names ... a lot:
“63.3% of GPT-4o and 70% of Claude-3.5 Sonnet articles include either the name Emily or Sarah.”
One expert on the panel that was so successful in identifying AI writing used the presence of these names as one of their tells.
Although LLMs are known to have a bias toward Western culture and seem likely to favor names that appear more often in their training data, it’s not clear whether Emily and Sarah tend to appear in all AI writing or if this was a specific quirk in just this study.1
Chatbots are crushing website traffic
Both Barron’s and the Wall Street Journal recently had articles about website traffic being crushed as more and more people get AI answers to their searches and don’t feel compelled to click through to the source. These are some of the search traffic drops at major websites:
Schwab.com: 14% last month
TripAdvisor: 34% last month
Starbucks: 40% last month
Netflix: 23% last month
U.S. travel and tourism sites: 20% year over year
E-commerce companies: 9% year over year
News and media sites: 17% year over year
Dotdash: Traffic from Google cut in half since 2021
HuffPost: Search traffic fell by more than half in the last three years
Washington Post: Search traffic fell by almost half in the last three years
The Atlantic: A top exec warned staff that traffic from Google will drop to zero
Business Insider: Just laid off 21% of staff, citing traffic drops
Some people blame AI for these drops, and the data seems to back that up: Barron’s reports that in March, “searches with AI Overviews resulted in a click 23% of the time. For searches without the overviews, the click rate was 36%.”
Referrals from chatbots like ChatGPT and Perplexity are growing, but currently replace only about 10% of the traffic lost from search.
Ad: AI with Allie
Since I’m a LinkedIn Learning instructor myself, I’ve seen how well they vet their courses, and I was impressed to see that Allie Miller is also a LinkedIn Learning instructor.
Expand What AI Can Do For You
Tired of basic AI prompts that don't deliver? This free 5-day course shows you how to create tools that actually address your problems—from smart assistants to custom software.
Each day brings practical techniques straight to your inbox. No coding, no fluff. Just useful examples to automate and enhance your workflow.
Word watch: fauxtomation
“Fauxtomation” was coined by Astra Taylor in Logic(s) Magazine in 2018 to describe the way automation often just shifts work: customers entering their own orders at a McDonald’s kiosk, for example, rather than an employee doing it.
But “fauxtomation” has taken on an AI-related meaning recently too. According to a 2024 Scientific American article, it is now “when human labor is hidden under the veneer of a robot or AI tool.” And in fact, Astra Taylor posted to Bluesky about how the term applies to the recent Scale AI acquisition.
Word watch: robotheism
Taylor Lorenz’s Power User podcast had the astonishing deep dive into the history of people believing AI is god that I didn’t know I needed. It will blow your mind and give you a new perspective on people who are forming relationships with AI.
Through the show, I also learned the word “robotheism”: the belief that artificial intelligence is god. The first entry in Urban Dictionary is from 2023, and the term doesn’t appear at all in a Google Ngram search.
More on that ‘brain rot’ study

The hype around that “ChatGPT is destroying your brain” study I complained about last week has gotten so out of control that the study’s author published a new page clarifying that the study was preliminary, had a small number of participants, focused only on writing an essay in an academic setting, and had other limitations.
The post also asked journalists to stop using the following words to describe the study: “brain rot,” “brain damage,” “stupid,” “harm,” “damage,” “terrifying findings,” and so on. One heading asked, “Is it safe to say that LLMs are, in essence, making us ‘dumber’?” and the answer was “No!”
I’ve never seen anything quite like this statement, and I applaud the researcher for trying to set the record straight.
More on the Anthropic copyright lawsuit
I covered the basics in a breaking news alert a couple of days ago: Anthropic won part of a copyright infringement lawsuit being brought against it by authors in what many people are calling a significant win for AI companies.
Since then, some points have become clearer. For example, the activity the judge said was not a copyright violation was Anthropic buying used books, breaking them down, scanning them, and using them for training.
On the other hand, the judge said that the second Anthropic copied pirated ebooks from online databases, it was in violation of copyright, regardless of what it did with them afterward. As a 404 Media story summed up, it’s not great for authors because “it suggests that training an AI on legally purchased works is sufficiently transformative, but that pirating those works in the first place is not.”
Anthropic will face damages in the pirated ebooks portion of the case in December, and the amount could reach into the billions, but I expect the company will appeal.
Learn More
— Authors Alliance
— Simon Willison’s Weblog
— The Fashion Law (lists the actual cases cited in the suit)
— The AI Inside podcast (I especially enjoyed Jeff Jarvis’ comments)
Also: Judge rules Meta’s use of copyrighted materials was fair use, but says authors didn’t make the right arguments (Wired)
Quick hits
Philosophy
We are still underreacting on AI — Pete Buttigieg
We used to consider writing an indication of time and effort spent on a task. That isn't true anymore. — One Useful Thing
AI can be bad overall without being useless — The Weird Turn Pro
A different kind of take from someone who was clearly as annoyed with that ‘cognitive debt’ news cycle as I was — Hybrid Horizons
Using AI
Writing in the age of LLMs (more on detecting AI and how one author uses AI) — Shreya Shankar
34% of U.S. adults have used ChatGPT, about double the share in 2023 — Pew Research Center
Legal
Creative Commons to develop licenses to signal AI training preferences and rights — Creative Commons
Other
China-based MiniMax releases a new open-source reasoning model that cost 10x less to train than DeepSeek — Computerworld
How ChatGPT and other AI tools are changing the teaching profession — Associated Press
What is AI Sidequest?
Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.
So here we are! Sidequesting together.
Because this newsletter is essentially still just a hobby, I don’t promote it as much as I should, and the biggest way the audience grows is when readers like you share it with a friend. So if you like the newsletter, please forward it to someone else you think would like it too.
1. Now that we know about it, we’ll probably notice it everywhere.
2. I find it unfortunate that a master’s in law is an “LL.M” degree. When I got to the end of this highly positive AI story and saw the author’s name followed by “LL.M,” I wasn’t sure if it was written by a real person or if the name was a cheeky AI-writing disclosure. After clicking around, I do believe it’s a real person.
Written by a human