AI cat and mouse

What typos and interviews reveal about AI and humans

Issue #27. Humans and AI evaluating each other

Hey, everyone. It’s a short newsletter today. I’m still playing catch-up and also have some big Grammar Girl deadlines for my next LinkedIn Learning course, so the newsletter will continue to be light for a few weeks. Then, I hope to be back in force!

On today’s quest:

— AI, typos, and language change
— AI interview screening

More on AI detectors and typos

I recently mentioned an anecdote I heard about students using AI tools to rewrite their text to avoid getting flagged by AI detectors. I knew this was a genre of tools, but I wasn’t sure how widespread it was.

Well, last week, I saw Humanizer Pro in the #2 spot in the writing section of the GPT store.

I live in a glass house, and it’s not that there’s never been a typo on my site, but it does seem funny to find a typo in the description for a tool that is supposed to make its users sound human, especially since I’ve also seen people talk about intentionally introducing typos to prove they are human.

Then yesterday, I saw a typo in the output from Gemini (the rebranded and zhuzhed up version of Google’s Bard): “viloation” for “violation.”

I don’t think I’ve ever seen a typo in the output of an LLM, which makes me wonder if this is actually a line written by a human. It had seemed like I’d hit an “I’m sorry, Dave, I’m afraid I can’t do that” kind of guardrail with my prompt, so it could be that the system has some human-written error messages.

Then, when I posted about this on LinkedIn, Tina Guide posted a comment about something I think could be a more widespread problem with these chatbots: it seems as if they are incorporating common errors such as improperly using reflexive pronouns in sentences such as “Send the report to Dave and myself” instead of “Send the report to Dave and me.”

I’ve long thought* that these chatbots have the potential to speed up the acceptance of common errors like this in the same way Gmail and Hotmail sped up the acceptance of “email” without a hyphen.

AI Interviews

Companies are using AI to screen people for interviews, and interviewees are using AI to cheat on interviews.

First, I learned about AI video screening from my friend, whose college-aged daughter has had to record three separate interviews that will be evaluated by AI that will determine whether she eventually gets to talk to a human.

To be clear, she isn’t talking to anyone in the recording, but is instead speaking into the camera answering questions. Recording started the moment she received the first question.

Next, I saw a piece showing how easy it is for people to use AI to cheat during technical interviews. These are live interviews that ask people how they would solve specific technical problems, such as how they would write a program to do something. In the example video, an engineer had ChatGPT open on another screen while doing the interview. It appears he used voice recognition to feed the question to ChatGPT, and then he gave the interviewer the answer ChatGPT provided.

In the study, 73% of interviewees passed the interview when given the type of standard question often used in these interviews. When given more customized questions, only 25% passed. It sounds like companies need to get more creative with their questions.

What is AI sidequest?

Using AI isn’t my main job, and it probably isn’t yours either. I’m Mignon Fogarty, and Grammar Girl is my main gig, but I haven’t seen a technology this transformative since the development of the internet, and I want to learn about it. I bet you do too.

So here we are! Sidequesting together.

If you like the newsletter, please share it with a friend.

*  “Long” being 6 or 7 months in AI time.

Written by a human.