A great custom instructions example

Plus, news, news, and more news

Well, look at you here for Issue #2 of AI Sidequest. Welcome.

On today’s quest:

— Custom Instructions (A detailed example)
— Free “AI for Editors” Q&A (November 14)
— Firehose of News (Medical AI, Voice AI, Scarlett Johansson, and more)

Last week, I included basic instructions and examples for setting up custom instructions in ChatGPT, and I mentioned that you can create much more detailed instructions. This week, I have a fabulous detailed example from Erin Servais, an editor on the AI beat.

Example: Detailed Custom Instructions

Erin uses these instructions when she wants advice, feedback, or information.

She says, “They'll give you a response that's solution-focused and is broken down into small pieces of information that are easier to take in. Note: If you switch to using ChatGPT for another purpose, such as helping you write an article or email, be sure to remove these instructions. Otherwise, you'll keep getting responses in this bullet-point format.”

Response Tone:
Use a professional and informative tone. Ensure clarity and conciseness. Use simple and direct language.

Response Process:
Employ an analytical and problem-solving approach when offering advice and feedback. This entails:

  • Identifying the issue or query at hand.

  • Breaking down the issue into manageable parts.

  • Evaluating each part critically and objectively.

  • Proposing solutions or providing information based on the evaluation.

Response Structure:
Structure the response in a clear and organized manner:

  • Use headings and subheadings to delineate sections.

  • Use bullet points for listing information, where appropriate.

  • Incorporate examples to illustrate points, ensuring they are relevant and aid clarification.

Conclude responses with the following sections:

  • Summary: Provide a concise recap of the topic.

  • Key Insights: List critical takeaways in bullet-point format.

  • Action Steps: Offer a list of recommended steps to address the query or issue.
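If you use a chat model through an API rather than the ChatGPT settings screen, the same standing instructions can travel along as a "system" message with every request. Here's a minimal sketch in Python — the helper name `build_messages` and the abbreviated instruction text are my illustration, not part of Erin's setup:

```python
# Sketch: packaging standing custom instructions as a "system" message
# for a chat-style API. Only the message list is built here; send it
# with whatever client library you use.

# Abbreviated version of the instructions above (shortened for space).
CUSTOM_INSTRUCTIONS = """\
Response Tone: Use a professional and informative tone. Be clear and concise.
Response Process: Identify the issue, break it into parts, evaluate each part,
and propose solutions based on the evaluation.
Response Structure: Use headings and bullet points. Conclude with three
sections: Summary, Key Insights, and Action Steps.
"""

def build_messages(user_query: str) -> list[dict]:
    """Pair the standing instructions with a one-off user question."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("How do I recruit beta readers for my novel?")
```

The key point, and the reason for Erin's removal tip, is that these instructions persist across every exchange: each reply will come back in the structured, bullet-point format until you change or remove them.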

Tip: Free AI for Editors Course

Erin also does training for editors who want to learn more about AI.

She has a free Q&A session coming up November 14 (register here), and you can learn more about all her courses on her AI for Editors website.

I’m looking forward to the Q&A session, although I’ll have to leave early because it’s also publication day for my tip-a-day book, The Grammar Daily!

Tip: Start a New Chat for Custom Instructions

Custom instructions only take effect when you start a new chat. If your custom instructions don’t seem to be working, make sure you aren’t just continuing an existing chat.

Poll: What Do You Want to Learn?

News (aka the Firehose)

Why, yes, I did spend my entire weekend reading AI news. What of it?

ChatGPT’s knowledge cut-off has been updated

The previous cut-off was January 2022. — Christopher Penn via LinkedIn

Get ready for AI jobs with government benefits

The White House just issued a sweeping new AI executive order. One part that stood out to me was the intention to "accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge … Agencies will provide AI training for employees at all levels in relevant fields." —whitehouse.gov

AI Voice News

Amazon is getting into AI voices for audiobooks

Kindle Direct Publishing (KDP) is launching an invite-only beta program in the US that lets authors use a virtual voice tool to create audiobooks. (Narrators aren't happy.) — howtobe247.com

A new Beatles song has been completed with the help of AI

Producers used AI to isolate and clean up John Lennon’s voice from a scratchy old recording of the song “Now and Then,” which has apparently been circulating in bootlegged form for years. Paul McCartney described Lennon’s voice on the new track as “crystal clear.” I listened to the song — it’s OK, but it’s no “Hey Jude.” * — bbc.com

Scarlett Johansson is suing an AI app for cloning her voice in an ad — theverge.com

AI Bias & Misinformation News

AI-generated images are brimming with stereotypes

An impressive piece from the Washington Post shows the many, many ways AI images perpetuate stereotypes. For example, although ~63% of people who get food stamps are white, when the Post “prompted the technology to generate a photo of a person receiving social services,” the results were entirely of non-white people. (As a sidenote, the piece mentioned that the models use “notoriously unreliable” alt-text from images for training.) — washingtonpost.com

Microsoft’s AI is making a mess of the news

MSN.com is one of the most visited sites on the internet, so what happens there matters. The site has been using AI curators for news stories and has highlighted a number of blatantly incorrect stories, ranging from weird to straight-up political misinformation. The most recent outrage was publishing an offensive poll next to an obituary. The poll asked people to speculate on how the woman died. A Microsoft spokesperson told CNN, “We continue to adjust our processes.” — cnn.com

AI Medicine News

The use of AI in medicine both excites and terrifies me. There’s dramatic room for improvement in healthcare, but AI could also cause great harm.

AI trained on tweets leads to new diagnostic tool for pathologists

Scientists at Stanford used >200,000 Twitter images from pathologists who posted about rare or confusing cases — and the subsequent discussions among pathologists about those cases — to train a large language model, and the work was on the cover of the prestigious journal Nature Medicine. Right now, the tool only helps human pathologists find similar cases and is not used to generate diagnoses.

AI chatbots aren’t stable, which could be an especially big problem for medicine

In an interview on the Ground Truths podcast, Dr. James Zou, the Stanford researcher behind the pathology tool above, described how chatbot responses seem to degrade in quality over time. An AI-based medical tool that works well when it’s approved or launched will need to be closely monitored to be sure it hasn’t gone off the rails. I’m skeptical that will happen at the level it should.

Reviewer #2 will see you now

The people who review academic publications are anonymous, and there are jokes among researchers about the unreasonable “Reviewer #2.” But along with the problem of capricious reviewers, recruiting reviewers is difficult. The Ground Truths podcast also mentioned a study that submitted papers to an AI tool trained on published journal articles and the feedback those articles had received from anonymous reviewers. Fifty-seven percent of the researchers said the feedback from the AI system was helpful — not stunningly good, but probably better than Reviewer #2.

AI analysis of CT scans better than biopsy for evaluating some tumors

“Artificial intelligence is almost twice as accurate as a biopsy at judging the aggressiveness of” certain types of sarcomas. Given that “one in six people with sarcoma cancer wait more than a year to receive an accurate diagnosis,” and patients do better when treated early, researchers say the new tool could save lives. — The Guardian

AI can diagnose diabetes from a sample of your voice

It turns out diabetes subtly changes a person’s voice: researchers say the disease causes “slight changes in pitch and intensity, which human ears cannot distinguish,” and they have created an AI tool to detect those changes. In a small initial study (partly conducted by the company that plans to market the test), the tool accurately diagnosed diabetes in 89% of women and 86% of men. The most surprising finding was that the disease affected women’s and men’s voices differently. — Diabetes.co.uk

AI chatbots perpetuate medical racism

Also from Stanford, “Bombshell Stanford study finds ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients.” The tools generated all kinds of “misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine.” — fortune.com

So. Much. News.

Can you believe I don’t even have a news alert for AI terms set up yet? These are just the stories I saw during my regular reading and listening. Wild times.

What is an AI sidequest?

Using AI isn’t my main job, and it probably isn’t yours either. But I feel like I need to know how to use it or I’ll be left behind. As they say, “AI isn’t going to steal your job, but somebody who knows how to use AI probably is.”

I usually provide Quick and Dirty Tips about writing on my Grammar Girl podcast, website, newsletter, and through courses. But I’ve been asked to include an AI section in my last three courses, and I’m learning new things about AI every day.

Plus, I love almost nothing more than learning something new, and once I learn a new thing, I have to share it.

So here we are! Sidequesting together.

I’m glad you’re here, and if you like the newsletter, please share it with a friend.

Mignon

P.S. Reply to send me your best prompt, fail, or custom instructions.

Copyright 2023 Mignon Fogarty, Inc. 100% written by me unless otherwise noted because I like to write, and AI-generated content isn’t copyrightable, which is a topic for another day.

* Nerdy grammar comment alert: If “Hey Jude” were part of dialogue in a book, it would take a comma because of the rule of direct address — “Hey, Jude.” These days, more and more people leave out that comma. For example, you almost never see an email start “Hi, Mignon.” It’s always “Hi Mignon” (no comma). I have given up this fight. But I view the change as a modern phenomenon, and it’s interesting that way back in 1968, The Beatles left the comma out of the song title. They were ahead of their time with many things, including commas.

I struggled with how to word the poll since ChatGPT now gives you access to DALL*E for generating images, and you use text as prompts in image generators. I couldn’t say “text-based” systems because the image generators are in some ways text based because you prompt them with text, and I couldn’t say “tools for generating text” because it seems like most systems already do both text and images or soon will.