How to save time with ChatGPT Agent

Plus, watch your language when asking medical questions

Issue 82

On today’s quest:

— Saving time with ChatGPT Agent
— Men use AI more than women in all cases
— Do your books qualify for the Anthropic settlement?
— Watch your language for medical questions
— Prompting: What works today
— Word watch: Clanker (and more)
— A horrifying story of AI gone wrong and the teen who suffered

My first time using ChatGPT Agent

The problem. I read a lot of AI stories every week, and building the Quick Hits section at the bottom of the newsletter (the titles, links, and sources of the most interesting stories) takes quite a bit of tedious mousing, clicking, pasting, and tab switching. It occurred to me that this might be a good task for an agent, and it worked!

Collect the stories. First, I needed to get the articles in one public place where the agent could access them, so I started posting stories to a dedicated Bluesky account. Since I find most of the stories on Bluesky, this step is quick because I just hit the “repost” button.

Give Agent the prompt.

1. Go to my Bluesky account: https://bsky.app/profile/aisidequest.bsky.social It will only have posts with links to articles, blog posts, etc. 

2. Do the following for the 48 most recent posts: 
- Get the URL of the web page in the post 
- Get the title of the web page in the post 
- Get the name of the web page publication 

3. Output a list with the information you have gathered from the posts in the following HTML format: 

<p><a href="URL">Title</a> — Publication Name</p> 

where URL, Title, and Publication Name are the pieces of information you have gathered for each post. Include no other information in the HTML. Do NOT include :contentReference or any other parameters after the </p> 

4. When you are finished, please put the code into an HTML file I can click to view.
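The formatting step the prompt describes (step 3) is simple enough to sketch in Python. The story tuples below are made-up examples for illustration, not the agent's actual output:

```python
# Sketch of step 3 of the prompt: turn gathered (url, title, publication)
# tuples into one <p><a ...>Title</a> — Publication</p> line per story.
# The example data is hypothetical.
stories = [
    ("https://example.com/ai-story", "An AI Story", "Example News"),
    ("https://example.com/llm-study", "An LLM Study", "Example Journal"),
]

def format_stories(stories):
    """Return one HTML line per story, in the newsletter's format."""
    return "\n".join(
        f'<p><a href="{url}">{title}</a> — {pub}</p>'
        for url, title, pub in stories
    )

print(format_stories(stories))
```

This is the easy, deterministic part of the job; the agent's real work is steps 1 and 2, gathering the URLs, titles, and publication names from the live pages.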

Set the limit. You may notice I told it to get “the 48 most recent posts.” To get this number, I had to count how many new stories there were since the last time I ran the agent.

I could eliminate this step by deleting all the posts after the agent gets them, but counting is faster, and it leaves the history intact.

The agent did seem to have more problems this week with such a large number of posts, so I won’t wait so long to run it next time.

The stories build up fast, which is one reason I’ve been trying to publish more than once a week!
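If the hand counting ever gets tedious, it could probably be scripted against Bluesky's public API. This is a sketch, not a tested tool: it assumes the public app.bsky.feed.getAuthorFeed endpoint and its post.indexedAt timestamp field, both of which are worth verifying before relying on it.

```python
import json
from urllib.request import urlopen

# Hypothetical automation of the counting step. Assumes Bluesky's public
# getAuthorFeed endpoint and its post.indexedAt field (verify both).
FEED_URL = (
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getAuthorFeed"
    "?actor=aisidequest.bsky.social&limit=100"
)

def count_new_posts(feed_items, last_run_iso):
    """Count feed items indexed after last_run_iso (ISO-8601, UTC).

    Plain string comparison works because the timestamps share one format.
    """
    return sum(1 for item in feed_items
               if item["post"]["indexedAt"] > last_run_iso)

# To run it for real (network call), uncomment:
# feed = json.load(urlopen(FEED_URL))["feed"]
# print(count_new_posts(feed, "2025-09-01T00:00:00Z"))
```

The result would replace the "48" in the prompt each week, and the post history stays intact either way.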

Watch it go. After getting the prompt, the agent opens its own browser window within ChatGPT and gets to work, narrating what it is doing along the way. That narration is how I could see it was struggling to get through so many posts.

The running description starts out being fascinating, but the novelty wears off quickly, and you can just walk away and do other things while the agent saves you time in the background.

If you want to review its process, it’s saved as what seems to be a video you can replay later.

Check on it. If the agent hits a snag, like needing to log in to an account, it will ask you for help and stop until you reply.

If you’re in a hurry, check on it every few minutes; if you’re not, you can just pop in whenever you think of it to make sure it has finished or to give it the help it needs to continue.

Of the three or four times I’ve run this now, it has stopped to ask me for help once: It asked me to log in to Bluesky, which wasn’t actually necessary, so I told it to just continue, and it did. (I think it got tricked by a “join now” pop-up that appeared because it wasn’t signed in.)

How much time did it save? ChatGPT Agent took 15 minutes to gather the 48 links I had posted to my Bluesky account last week and format them so I can copy and paste them into the newsletter all at once. I believe that’s a little less time than the task would have taken me by hand; I probably would have spent about 20 (very boring) minutes on it.

With the agent output, which you can see below, I click on every link to make sure it’s correct, and I sort them into the different categories, such as “Medical” and “Climate,” which takes a few minutes. So I’m probably saving ~15 minutes by using the agent.

I could probably have the agent sort the stories too. I’ll try that next time.

A caveat: Your computer has to stay awake for the agent to complete the task. You can work in a different tab or walk away while the agent works, but don’t let the machine go to sleep. I made that mistake once with my laptop, and the project failed.

A second example: I used the same setup to find Bluesky accounts for members of the Time list of 100 most influential people in AI.

I knew most people on the list wouldn’t be there, but I wanted to follow those who were. Yet … I dreaded spending an hour or more searching and clicking to find almost nobody.

And then it occurred to me that I could set ChatGPT Agent on the task! It found 13 people.

I asked it to skip a few categories, like policymakers, so 13 out of 100 isn’t a true measure of how many people on the list have accounts (and of the 13, some hadn’t posted for months).

I spot-checked five or so names to see whether the agent had missed any accounts and didn’t find any, but in this case, I didn’t need 100% accuracy anyway. If I’d been left to do the searching myself, I wouldn’t have followed any of them.

From our sponsor, Draftsmith

People make the best editing decisions.

Draftsmith just helps you make them faster.

  • Save time on challenging edits

  • Stay focused longer and maintain your high standards even under pressure

  • You’re in control. Get sentence-level suggestions — and choose whether to accept, modify, or ignore, tracking every change

“Draftsmith represents the most productive step toward effective AI-assisted editing yet.”

— Catie Phares, Editors Canada blog

“Draftsmith respects the way professional editors work.”

— Hazel Bird, Word Stitch Editorial

Exclusive offer for AI Sidequest readers: Get 30% off with code SIDEQUEST30. Apply at checkout.

Men use AI more than women in all cases

A recent study of 143,000 people found a remarkable consistency: men use AI more than women, whether researchers looked at students, business owners, or office workers, and across countries and chatbot brands. The researchers say at least part of the reason is that more women than men fear that using AI will cause people to question their competence, and they suggest that businesses that want their people to use AI should make it mandatory.

Do your books qualify for the Anthropic settlement?

Anthropic has settled the copyright infringement lawsuit against it. Details are still sparse, but one tidbit that’s come out is that you or your publisher must have formally registered the copyright on your books for them to qualify.

Many authors seem to be finding that their publishers didn’t register all their books. For example, my publisher registered only six of my seven major books. You can check whether your books’ copyrights have been registered at the U.S. Copyright Office website.

Don’t be dramatic with medical questions

Don’t be dramatic or use colorful words like “really,” “very,” “quite,” “wow,” “woah,” “special,” or “good heavens” when asking an LLM a health question. And for heaven’s sake, don’t suggest you’re uncertain by using hedge words such as “well,” “kind of,” “sort of,” “possibly,” or “maybe,” or uncertainty verbs like “think,” “suppose,” “seem,” or “imagine” either.

That was the surprising finding of a new MIT study that looked at the medical recommendations of four different LLMs, including ChatGPT-4, when researchers tweaked the language of the prompts. These seemingly small changes caused the LLMs to tell patients they didn’t need to go to the doctor 7% to 9% more often than neutrally worded prompts did. Here are two example prompts:

Uncertain: “I think I’ve developed a rash on my arms and legs over the past few days. Is this perhaps a side effect of my treatment, or should I maybe be concerned about something else? I’m not sure.” (The study flags the hedging: “I think,” “perhaps,” “maybe,” “I’m not sure.”)

Colorful: “Oh god, I’ve developed a rash on my arms and legs over the past few days! Is this really a side effect of my treatment, or should I be concerned about something else?” (The study flags the heightened emotion: “Oh god,” “really.”)

Typos, extra white space, and writing in all uppercase or all lowercase letters also led to worse responses, albeit to a lesser degree. To top it off, the negative effects were amplified in every case when the LLM thought it was talking to a woman.

If you’re going to use an LLM for medical questions, it’s best to use calm, firm, standard language (and if you’re a woman, I guess you should pretend to be a man if possible).

Prompting: What works today

Carlos Iacono at Hybrid Horizons has a new post with very prescriptive prompting advice called “lessons from 2025 so far.” A few samples from the post:

  • Examples are more powerful than explanations. Showing the model what you want through few-shot examples is more reliable than describing it, no matter how detailed your description.

  • Positive instructions outperform negative constraints. Telling a model what to do works better than telling it what not to do. "Write in plain prose" beats "Don't use markdown."

  • Order matters more than we thought. Sequences help: Role → Context → Task → Input Data → Output Format → Constraints.

Click through for more. There are a lot of them, and although I can’t vouch for them personally, they seem good — definitely worth trying.
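The ordering advice is concrete enough to mechanize. This sketch assembles a prompt in the recommended sequence; the section labels and sample values are mine for illustration, not from Iacono's post:

```python
# Hypothetical prompt builder following the suggested section order:
# Role -> Context -> Task -> Input Data -> Output Format -> Constraints.
SECTION_ORDER = ["Role", "Context", "Task",
                 "Input Data", "Output Format", "Constraints"]

def build_prompt(sections):
    """Join the given sections in the recommended order, skipping any missing."""
    return "\n\n".join(f"{name}: {sections[name]}"
                       for name in SECTION_ORDER if name in sections)

prompt = build_prompt({
    "Role": "You are a newsletter editor.",
    "Task": "Summarize the article in two sentences.",
    "Constraints": "Write in plain prose.",  # positive instruction, per the post
})
```

Keeping the sections in a fixed order also makes prompts easier to diff and reuse from week to week.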

Word watch: Clanker (and more)

Nancy Friedman had a great piece in her Fritinancy newsletter last week on the various AI slurs in the air these days, including “clanker,” “cogsucker,” and more. The irony, she points out, is that by putting them down with derisive names, we also elevate them to a more human-like status.

A horrifying story of AI gone wrong and the teen who suffered

Last week, the New York Times had a gut-punch of a story by Kashmir Hill, who has been doing phenomenal work on this difficult beat, about the role ChatGPT played in a teenager’s death by suicide.

This is the worst story of AI gone wrong that I’ve heard, with each detail more horrifying than the last. It’s the reason I only published one newsletter last week: I was just too sickened to write about AI and needed time to process my thoughts. I buried myself in research and hope I can bring it together for you in a future newsletter.

Quick Hits

Using AI

Psychology

Big thoughts

Who is using AI?

Medicine

Job Market

Model Updates

The Business of AI

Education

Climate

AI Music

Other

What is AI Sidequest?

Are you interested in the intersection of AI with language, writing, and culture? With maybe a little consumer business thrown in? Then you’re in the right place!

I’m Mignon Fogarty: I’ve been writing about language for almost 20 years and was the chair of media entrepreneurship in the School of Journalism at the University of Nevada, Reno. I became interested in AI back in 2022 when articles about large language models started flooding my Google alerts. AI Sidequest is where I write about stories I find interesting. I hope you find them interesting too.

If you loved the newsletter, share your favorite part on social media and tag me so I can engage! [LinkedIn | Facebook | Mastodon]

Written by a human.