Here's a thought that might make you look at your phone a little differently: that apology from your friend, the check-in from a coworker, the sweet message from someone you're dating - it could have been written by ChatGPT, and you'd probably never know.

That's the unsettling takeaway from new research described in Fast Company, in which researchers recruited more than 1,300 Americans between the ages of 18 and 84 to examine how we judge people based on their writing in the age of AI. The results? Most of us aren't just bad at spotting AI-generated personal messages - we're not even thinking to look.

The blind spot we didn't know we had

The researchers, including Jiaqi Zhu, showed participants AI-generated messages - things like a personal apology sent over email - and split them into groups, some given context about AI involvement and some not. The finding that stands out most isn't about detection rates. It's that the majority of people simply don't consider the possibility that a personal message could be AI-generated in the first place.

And here's the twist: this applies even to people who use AI to write messages themselves. We know the tool exists. We use it. And we still don't think to question the authenticity of what lands in our inbox.

Why this actually matters

It's tempting to brush this off as a quirky tech finding, but the implications run deeper than they first appear. Personal messages carry emotional weight precisely because we assume they reflect a person's genuine effort and feeling. When someone takes time to write a thoughtful apology or a vulnerable message, we factor that effort into how we receive it.

If AI can replicate the form of that effort without the substance, it quietly shifts what communication actually means between people. Trust, intimacy, accountability - these things are partly built on the assumption that the words someone sends you are actually theirs.

This isn't about demonizing AI writing tools - plenty of people use them for totally reasonable purposes, like polishing a message when English isn't their first language or working through social anxiety. But the gap between how often this is happening and how often we're aware of it is worth paying attention to.

What you can do with this

You probably can't train yourself to detect AI writing reliably - the research suggests most people can't, at least not without being primed to look for it. What you can do is be a little more intentional about your own communication, and think about what you actually want the messages you send to say about you.

In a world where AI can handle the words, showing up with your own might be the most human thing you can do.