Of all the things artificial intelligence could do for us - accelerate drug discovery, model climate solutions, help engineers design safer infrastructure - we seem most excited about getting it to write our emails. That tension sits at the center of a growing cultural debate, and a recent piece in Fast Company frames it with refreshing bluntness: AI writing might be the bleakest use case for a technology with genuinely extraordinary potential.
Two camps, one very loaded question
The conversation has split people into two pretty entrenched positions. Defenders of AI writing argue it's a practical tool - a way to turn rough notes into polished copy faster, eliminate typos, and clear the mental backlog of everyday communication. For people who struggle with writing, or who simply have more ideas than time, that's not a trivial benefit.
But critics push back on something harder to quantify. There's a feeling - and it's a widely shared one - that outsourcing writing to a chatbot violates something worth protecting. Writing isn't just output. It's thinking. It's the friction of finding the right word that forces you to figure out what you actually mean.
The cognitive cost nobody's tallying
That's where the more interesting anxiety lives. It's not really about whether AI text sounds good enough. It's about what happens to us when we stop doing the work ourselves. Language shapes thought. If we hand off the drafting, are we also quietly outsourcing the reasoning behind it?
There's a memorable image in the Fast Company framing - the idea of humans becoming "cognitively deflated Sims," going through communicative motions without much happening underneath. It's a little dramatic, sure, but it gestures at something real. Heavy reliance on any tool changes how we develop the underlying skill. We already know this from what GPS has done to spatial reasoning. The question is whether we're okay with it happening to language.
What actually gets lost
The strongest version of the critic's argument isn't that AI writing is bad because it's lazy. It's that writing is one of the core ways humans process experience, build arguments, and connect with other people. A world where that gets increasingly delegated to predictive text engines is a world that's given something up - even if the output looks fine on the surface.
None of this means AI tools are inherently evil, or that using one makes you a bad person. But it does suggest we could stand to be more deliberate about when and why we reach for them. Speeding up a task is one thing. Replacing the act of thinking with a shortcut is something worth pausing on.
The technology isn't going anywhere. The question is just whether we want to be the ones doing the writing - or merely signing off on it.