'It is so nice to me': Why people are naming their chatbots
I thought emotional AI stories were weird. Then I started collecting them.
Issue #53
I’m usually not that interested in AI-companion stories, but months ago I read “The Confusing Reality of Online Friends” from The Verge, about people who have formed strong emotional bonds with digital companions, and I still haven’t been able to get it out of my mind. I never made a note to include it here, which usually means I’d forget about it entirely, but every time I sit down to write the newsletter, I think about including it. So I finally decided to do this special issue.
Of course, it could just be the writing, but one thing that struck me was how normal the people profiled in the article seemed. And even though they knew their “friends” weren’t real, they felt real emotional involvement with them.
I was reminded of another article I read a long time ago about how people who fall victim to financial romance scams often miss their scammers even after they realize the relationships weren’t real.
Also, when I talk with regular people who are newly using AI, they often comment on how emotionally pleasant the experience is. Two people said the exact same words: “It is so NICE to me.” And they didn’t say it ironically or dismissively, mocking the gratuitous flattery the way you’ll sometimes see people do online. These people were genuinely delighted that “someone” was so nice to them.
Yesterday, I saw people on LinkedIn talking about giving Claude or ChatGPT a name because it feels like they’re talking to a person, and they want that person to have a name.
I used to think only weirdos would form attachments to a chatbot, but I don’t think that anymore. I have no idea what this means for society, but I believe it means something, and we shouldn’t ignore it. *
Here are some more articles I’ve gathered over the last few months about the emotional/companion side of AI.
An AI companion suggested a teen kill his parents. I’m no legal expert, but it seems like the parents in these lawsuits can show real and obvious harm.
ChatGPT told a Reddit user he is a genius. This is definitely a “do read the comments” situation.
People who texted with another human and with ChatGPT 4.5 thought ChatGPT was the human a whopping 76% of the time. Three other models in the UC San Diego study did not trick participants nearly as often.
An OpenAI/MIT study had surprising results regarding emotional interactions with ChatGPT. People who had non-personal interactions were more likely to view AI as a friend, and text interactions were more likely than voice interactions to cause emotional attachment. Both of these are the opposite of what I would expect.
A study of an AI therapy bot yielded positive results, but how these bots are implemented matters a great deal: an MIT lab study found that lonely people who used chatbots ended up feeling even lonelier.
Some studies have shown that humans prefer sycophantic answers from chatbots and will sometimes even prefer false, flattering responses to truthful ones. (This can be a problem when human feedback is used to train AI models.)
What is AI Sidequest?
Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.
So here we are! Sidequesting together.
If you like the newsletter, please share it with a friend.
Written by a human.
* I also don’t know what it means to “not ignore it.” What I do here is document and explore what I’m learning about AI, so that’s what I’ll keep doing.