Will students welcome an AI tool that tracks their writing? Would you?

Plus, the 'delve' dilemma

Issue #41

On today’s quest:

— How to make AI output sound like you
— A glorious AI podcast
— Fortune 500 companies see AI risks
— Grammarly Authorship tracks the source of text
— The “delve” dilemma
— AI can now write texts so long you won’t want to edit them
— When “tortured phrases” were actually helpful

How to make AI output sound like you

Christopher Penn has a fantastic new post describing in detail how to make AI output sound more like your own writing.* It’s not a speedy process, but if you want to use AI for a lot of writing, it seems as if it could be worth the effort.

The very short version is that you give it a huge amount of your own writing, ask it to analyze the style of that writing, have it output some content, grade that output, and iterate. But if you want to do it, you really need to read the whole post.
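If you're curious what that loop looks like in practice, here's a rough Python sketch. To be clear, the prompts are my own placeholders and the OpenAI client is just one example backend, not Christopher's actual process; his post spells out what to ask for at each step in far more detail.

```python
# A minimal sketch of the analyze-draft-grade-revise loop, with placeholder
# prompts. Swap in whatever chat model you actually use.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str) -> str:
    """Send one prompt to the model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def build_style_guide(samples: list[str]) -> str:
    """Have the model describe the voice in a pile of your own writing."""
    return ask(
        "Analyze the writing style of these samples (tone, sentence length, "
        "vocabulary, quirks) and write a reusable style guide.\n\n"
        + "\n\n---\n\n".join(samples)
    )

def draft_in_your_voice(style_guide: str, brief: str, rounds: int = 3) -> str:
    """Draft, grade the draft against the style guide, revise, repeat."""
    draft = ask(f"Follow this style guide:\n{style_guide}\n\nWrite: {brief}")
    for _ in range(rounds):
        critique = ask(
            "Grade this draft against the style guide on a 1-10 scale and "
            f"list specific fixes.\n\nGuide:\n{style_guide}\n\nDraft:\n{draft}"
        )
        draft = ask(
            "Revise the draft to address the critique.\n\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```

The shape is the point: profile your own voice once, then draft, grade, and revise against that profile instead of settling for the model's first try.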

A glorious AI podcast

As a podcaster, I closely follow AI voice technology, and a new podcast about AI voices by Evan Ratliff called Shell Game is amazing. Evan creates voice clones of himself and sets them loose on the world — on strangers, scammers, friends, and loved ones — all with almost no oversight. It’s a revealing train wreck that will make you laugh, cringe, and wonder.

Fortune 500 companies see AI risks

Fifty-six percent of Fortune 500 companies mentioned AI risks in their most recent annual reports. The top industries expressing concern were:

92% Media and entertainment
86% Software and technology
70% Telecommunications
65% Healthcare

Only 31% of companies mentioned the benefits of AI in their filings. — Arize (PDF)

Grammarly Authorship tracks the source of text

Grammarly is rolling out a new writing product aimed at the education market.

Grammarly Authorship appears to be a word processor that tracks the source of input material, for example, flagging text as typed directly in the document, pasted from another source, or generated by AI. The company says the tool will eliminate the problem of students’ work being incorrectly flagged by buggy AI detectors. — ZDNET

The ‘delve’ dilemma

Grammarly Authorship feels like a walking privacy violation to me, but maybe students (and professors) will welcome it given what they’re already dealing with. For example, a post by Professor Laura K. Nelson on Bluesky reminded me of my fraught relationship with the word “delve.”

“Delve” is known to be a word that shows up in a lot of AI writing, and I have a writer who likes to use the word. I get stressed every time I see it in her work now! I edited it out of a previous piece because I don’t want people to think we’re writing with AI. But I left it alone in the last piece because I was feeling stubborn, and we shouldn’t have to abandon a perfectly good word because of AI.

AI that goes on and on

By training their LLM on longer texts, researchers at Tsinghua University in Beijing have created “a new artificial intelligence system that can produce coherent texts of more than 10,000 words.” They say, “Publishers might use AI to generate first drafts of books or reports. Marketing agencies could create in-depth white papers or case studies more efficiently. Education technology companies might develop AI tutors capable of producing comprehensive study materials.”

Having seen AI first drafts, I immediately thought about what a pain it would be to edit these long-form outputs. — VentureBeat

When tortured phrases were … good?

Earlier computer programs that helped dishonest researchers publish plagiarized articles used word substitution that led to tortured phrases such as “bosom peril” for “breast cancer,” “counterfeit consciousness” for “artificial intelligence,” and “mean square blunder” for “mean square error.”

But these tortured phrases also helped journals detect the shoddy articles using tools such as the Problematic Paper Screener. Computer science professors writing in The Bulletin of the Atomic Scientists said that “as AI technology improves, spotting these fakes will likely become harder, raising the risk that more fake science makes it into journals.”
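To see why those fingerprints made the fakes catchable, here's a toy illustration of the kind of check a screener can run. It's my own simplified sketch, not the Problematic Paper Screener itself, which works from a much larger curated list.

```python
# Toy check: scan text for known "tortured phrase" fingerprints left
# behind by automated word substitution.
TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "counterfeit consciousness": "artificial intelligence",
    "mean square blunder": "mean square error",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED_PHRASES.items() if bad in lowered]

print(flag_tortured_phrases(
    "We report the mean square blunder of our counterfeit consciousness model."
))
# [('counterfeit consciousness', 'artificial intelligence'),
#  ('mean square blunder', 'mean square error')]
```

The catch, as the professors warn, is that better AI models don't leave these telltale substitutions behind, so the fakes get harder to spot.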

Quick Hits

Authors sue Anthropic for copyright infringement. In a suit similar to those facing OpenAI, authors accuse Anthropic of “tapping into repositories of pirated writings to build its AI product [Claude].” — Associated Press

AI has a climate problem, but so does all of tech. How do you decide if AI is worth the energy? (I have been gathering information for months to write about AI and climate, but this Decoder podcast does an excellent job of summing up what is essentially still in my head and not in the newsletter.)

What is AI Sidequest?

Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.

So here we are! Sidequesting together.

If you like the newsletter, please share it with a friend.

* I’m interviewing Christopher this week for the Grammar Girl podcast, and I’m sure we’ll talk about this post. The episode is tentatively scheduled to come out October 17.

Written by a human.