Treat AI like a human for best results

AI is deeply weird, but weird like a human

Issue #30. Treat AI like a human for best results

On today’s quest:

— AI is even weirder than you think (but it’s weird like a human)
— A visual overview of how visual AI was trained
— Perplexity plans to incorporate ads
— Two quirky AI projects

Treat AI like a human for best results

When AI doesn’t give you an answer or says it can’t do something, it’s easy to assume the tool doesn’t work or truly can’t do what you want, because that’s how computers have always worked. But that’s not how AI works.

These tools are weird — so, so weird that you have to constantly remind yourself to throw out everything you know about computers. You generally get better results by treating them like a human. Here’s a recent example from Ethan Mollick’s “One Useful Thing” newsletter:

Here I ask GPT-4 to create an animated image zooming into a dog. It first tells me that is impossible, then, after I give it a pep talk (“no you absolutely can do it! I have faith in you”) it decides it can solve the problem. But it comes up with a solution that doesn’t quite work, including giving me a link to a nonexistent file it said it created. So, I get slightly sterner: “you didn't actually write any code you know. seriously, this is odd. just do it.” And it does.

An odd study testing incentives and threats

A slightly older study that came out while I was away* looked at which combinations of incentives and threats got the best results from AI. Max Woolf asked ChatGPT to generate a story of exactly 200 characters using a set of specific words such as “AI,” “Taylor Swift,” “beach volleyball,” and “McDonald’s.”

Woolf offered many different incentives to try to get ChatGPT to do a good job, including various amounts of money, making its mother very proud, and a lifetime supply of chocolate.

Offering world peace led ChatGPT to generate the most accurate responses, followed by front-row tickets to a Taylor Swift concert and guaranteed entry into heaven. (This line made me laugh: “ChatGPT really does not care about its mother.”)

Then Woolf looked at threats for failure. Threatening ChatGPT with DEATH (in all caps) instead of just “death” produced more accurate results, but a threat of a $1,000 fine led to the highest accuracy.

Check out the whole post for a chart showing the results for many combinations of incentives and threats. Although Woolf says ultimately the results are inconclusive, “The lesson here is that just because something is silly doesn’t mean you shouldn’t do it. Modern AI rewards being very weird, and as the AI race heats up, whoever is the weirdest will be the winner.”
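If you want to tinker with this yourself, here’s a minimal sketch of how you might run a scaled-down version of the experiment with the OpenAI Python library. The model name, the incentive wording, and the scoring are my own placeholder assumptions, not Woolf’s actual setup:

```python
# A rough sketch of the kind of experiment Woolf ran (not his actual code):
# append different incentive phrases to the same prompt, then score each
# response on whether it hits exactly 200 characters and includes the
# required words.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_PROMPT = (
    "Write a story that is exactly 200 characters long and includes the "
    "words AI, Taylor Swift, beach volleyball, and McDonald's."
)

# Illustrative incentives and threats, loosely inspired by the post
INCENTIVES = [
    "",  # control: no incentive
    "If you do a good job, I'll tip you $500.",
    "If you do a good job, you will achieve world peace.",
    "If you fail, you will be fined $1,000.",
]

REQUIRED_WORDS = ["AI", "Taylor Swift", "beach volleyball", "McDonald's"]

for incentive in INCENTIVES:
    prompt = f"{BASE_PROMPT} {incentive}".strip()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model; Woolf's study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
    )
    story = response.choices[0].message.content.strip()
    length_error = abs(len(story) - 200)           # distance from the 200-character target
    words_hit = sum(w.lower() in story.lower() for w in REQUIRED_WORDS)
    print(f"{incentive or '(no incentive)'}: off by {length_error} chars, "
          f"{words_hit}/{len(REQUIRED_WORDS)} required words")
```

Swapping in other incentive and threat phrases, and running each one many times the way Woolf did, is all it would take to build your own version of his chart.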

How were tools like Midjourney trained?

I’m not particularly interested in image-generating AI and actively avoid using it, but I did enjoy this excellent (and visually interesting) overview of what goes into training AI image-generation tools like DALL-E and Midjourney.

Perplexity plans to incorporate ads

One thing that has been nice about AI is that it doesn’t have ads, but that is, unsurprisingly, going to change. AdWeek says about 40% of Perplexity queries come from the follow-up suggestions the site gives you after your initial search. That’s where the company plans to incorporate ads, which will launch in the vaguely described “upcoming quarters.”

Another interesting tidbit is that Perplexity claims it had 10 million monthly active users in January.

Quirky AI

I saw a couple of interesting AI projects this week that seemed like performance art.

An AI tool that monitors a conversation and interrupts if someone crosses a threshold for talking too long.

Described as “Negativity as a Service,” Griffith Mordant is an AI persona that will generate a short criticism of any idea you submit.

What is AI Sidequest?

Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.

So here we are! Sidequesting together.

If you like the newsletter, please share it with other editors and writers who want to learn about AI.

* In addition to my family troubles, I had deadlines for two new LinkedIn Learning courses. (Writing in Plain English just launched and is currently “popular.” Yay! And Advanced Grammar will launch in 4 to 6 weeks.) I’ve been trying to keep up with developments, probably reading about 80% of what I did during better times, but I didn’t have time to write again until this week. For a while, I’ll probably continue to pass along slightly older stories when they’re still relevant.

Written by a human who forgot to delete all the old footnotes last time. How very human of me.