A massive AI fail (you can use as an example)
OMG, please don't do this!!!
Welcome to Issue #8 of AI Sidequest. (I counted!) On today’s quest:
- An AI debacle that makes a great cautionary tale.
- A tip about formatting citations.
- News about another debacle, Meta’s cool new image editing demonstration, and insights from the World Economic Forum.
If you’re a new subscriber, welcome! You can find all the old issues on the Beehiiv page.
How not to use AI
I’ve been swearing I’m going to limit myself to one newsletter a week, but there’s just so much to say. Today, a podcasting company had to walk back a catastrophically bad attempt to round out its website with AI-generated content. (It turns out this was the source of at least some of the “AI will never replace writers” comments I alluded to last week.)
Podnews Daily reported yesterday morning that Goodpods, one of the smaller sites where you can listen to podcasts, had created AI-generated descriptions for every podcast on its site — and they were so terrible that podcaster outcry forced Goodpods to quickly take them down.
Perhaps the most appalling part is that the company initially responded by telling podcasters they were welcome to use the “edit” button to fix the descriptions.
Creating garbage and then asking the people whose content fuels your site to fix it is not a way to win friends and influence people. (Also, apparently not every page had an “edit” button.)
James Cridland from Podnews Daily found many bad examples in cached pages after they were taken down. Some were funny or obvious errors, but some were offensive:
They described PJ Vogt’s show “Search Engine” as being about SEO, but it’s more of a general interest show. PJ describes it as answering “the kinds of questions you might ask the internet when you can’t sleep.”
They described a show about violence against women and sexual assault as a “love and dating advice show.”
The language was also absurd. As I told you recently, you can endlessly tweak the tone of AI-generated output, but Goodpods apparently didn’t get the memo. I can’t believe anyone would see text like this and decide, “Yes, this is what we want all over our site”:
“Imagine the podcast world like a bustling city - each story its own skyscraper, every episode a window peering into another reality. This show is perfect for all - the creators curating compelling content for inquisitive ears, the ardent podcast fans eager not to miss a beat, and every newcomer dipping their toes into the vast ocean of podcasts.”
This isn’t an example of AI failing; it’s an example of people failing. I’m sure they could have used AI to add better descriptions to their website, but it still requires actual work. The question isn’t “Can we add more content to our website essentially for nothing?” It’s “Can we add more content to our website for some lower cost than before, and if so, is that worth it to us?”
After laughing and writing before I’ve even had all my coffee, I settle into feeling angry. I believe people need to learn to use this technology so they keep their jobs and their clients (that’s why I write this newsletter), and stories like this make people who haven’t tried AI yet, or who don’t read widely about it, think it’s not a threat.
On the other hand, it’s a great case study to show your boss or client. ChatGPT and the like aren’t magic human-replacement tools that work well out of the box. Companies still need people who have trained themselves to use these tools and who know how to communicate with stakeholders. Hopefully, that’s you.
Tip: ChatGPT Defaults to APA Style
One tidbit from the Nature paper in the last newsletter that might be particularly interesting to writers and editors is that ChatGPT uses APA format unless otherwise instructed.
I know people who swear by using ChatGPT-4 to quickly format citations in different styles. However, in the Nature study, about 40% of the citations had minor formatting errors.
Formatting was not the primary focus of the study, so it’s possible that if you give these tools more specific formatting prompts they will do a better job. But even if they don’t, a 60% success rate will still save you a ton of tedious work. Just consider it a first pass and be sure to clean up the output.
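If you want to experiment beyond the chat interface, here’s a minimal sketch of what a more specific formatting prompt could look like using OpenAI’s Python library. The model name and the exact instructions are my own assumptions, not something from the Nature study; the same idea works just as well typed directly into ChatGPT.

```python
# A minimal sketch (not from the study): asking the model to reformat a
# citation into a named style and to flag anything it can't verify.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

reference = (
    "Claude Shannon, A Mathematical Theory of Communication, "
    "Bell System Technical Journal 27 (1948): 379-423."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; swap in whichever model you use
    messages=[
        {
            "role": "system",
            "content": (
                "Reformat each reference the user provides into Chicago "
                "author-date style. Do not invent missing details; mark "
                "anything you cannot verify with [?]."
            ),
        },
        {"role": "user", "content": reference},
    ],
)

# Treat the output as a first pass and clean it up yourself.
print(response.choices[0].message.content)
```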
News
The Sam Altman debacle
On Friday afternoon, OpenAI’s board suddenly fired its CEO, Sam Altman, and then found itself facing an open revolt from its executives, employees, funders, and partners. By Monday, 500 of the 700 employees had signed a letter asking the board to resign, and Microsoft (OpenAI’s biggest investor) had hired Altman. This is a fast-moving situation that some are calling the biggest Silicon Valley story in decades, and things could change again before I send the newsletter. How this will affect ChatGPT remains to be seen. Pretty much every outlet in the world has written about it, but here’s a summary from CNN.
Meta brings editing to AI image generation
Meta has a nifty new AI image editing tool, although it’s only a research demonstration right now. Often the first image you get out of something like DALL·E isn’t exactly what you want. With the new system, instead of starting over, it looks like you can just tell it the changes you want: make the dog a boxer instead of a poodle, make the background a beach scene, and so on. Check out the Meta page for impressive examples (keeping in mind that these are probably the best examples they have, and it’s not live yet). — Meta
AI is bullshit
Jeff Jarvis has a fascinating rundown of the recent World Economic Forum AI summit, with thoughts about safety, development, what’s possible, the potential benefits and downsides, and more.
His decades of work on other new technologies (including the history of print) give him an interesting perspective. He’s not anti-AI, but these are a couple of sentences that stood out to me:
The full effect of a new technology can take generations to be realized.
A machine that is trained to imitate human linguistic behavior is fundamentally unsafe.
I recommend the whole thing. — BuzzMachine
What is AI Sidequest?
Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.
So here we are! Sidequesting together.
If you like the newsletter, please share it with a friend.
* I dislike the term “hallucinate” because it implies these tools are thinking, but it seems to be the term that is becoming the standard for describing this kind of error.
Written by a human. Copyright 2023, Mignon Fogarty, Inc.