If you've been anywhere near a tech conversation lately, you've probably heard the hype: AI tools are supercharging developer productivity, cranking out code at speeds no human could match. But a closer look at how teams are actually using these tools tells a more complicated story.

A new report from TechCrunch highlights a growing problem in AI-assisted development called "tokenmaxxing": the practice of prompting AI models to generate as much code as possible in a single output. The idea is intuitive enough: more code, faster, equals more productivity. Right?

The hidden cost of more code

Not quite. The issue is that while tokenmaxxing does produce a higher volume of code, that code tends to be bloated and hard to maintain, and it frequently requires significant rewriting before it's actually usable. You end up with a lot of output, but much of it is noise.

Think of it like asking someone to write a report and having them hand you 80 pages when you needed 15. Sure, they technically did a lot of writing - but now you're the one doing the real work of figuring out what's useful.

Cost compounds the problem. AI tools charge based on token usage, so prompting for maximum output isn't just potentially wasteful in terms of time - it's also more expensive per useful line of code produced. Teams that lean heavily into tokenmaxxing may find their AI tooling bills climbing even as their codebase gets harder to maintain.
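To see why "more expensive per useful line" holds, it helps to run the arithmetic. The sketch below is a back-of-envelope model with entirely hypothetical numbers (token price, output sizes, and the fraction of generated lines that survive review are all assumptions, not figures from the report):

```python
# Back-of-envelope model: what a surviving line of AI-generated code costs.
# All numbers below are illustrative assumptions, not real pricing data.

def cost_per_useful_line(tokens_generated: int,
                         price_per_1k_tokens: float,
                         lines_generated: int,
                         useful_fraction: float) -> float:
    """Divide the cost of the output tokens by the lines kept after review."""
    total_cost = tokens_generated / 1000 * price_per_1k_tokens
    useful_lines = lines_generated * useful_fraction
    return total_cost / useful_lines

# Targeted prompt: a small output, most of it kept after review.
targeted = cost_per_useful_line(2_000, 0.03, 100, 0.8)

# Tokenmaxxed prompt: five times the output, most of it discarded.
maxxed = cost_per_useful_line(10_000, 0.03, 500, 0.2)

print(f"targeted prompt: ${targeted:.5f} per useful line")
print(f"tokenmaxxed:     ${maxxed:.5f} per useful line")
```

Under these assumed numbers, the tokenmaxxed prompt costs four times as much per line that actually ships, even though its raw output is five times larger.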

Productivity theater vs. real output

What's happening here is a version of what you might call productivity theater - the appearance of doing more, backed up by impressive-looking metrics, that doesn't quite translate to real-world results. The number of lines generated isn't the same as the number of features shipped or bugs fixed.

This doesn't mean AI coding tools aren't valuable - they clearly are for a lot of teams. But it's a useful reminder that the way you use a tool matters just as much as whether you use it at all. Prompting more thoughtfully, asking for targeted outputs rather than sprawling ones, and building in proper review time can make a significant difference in whether AI assistance is actually helping or just generating extra work.

For anyone managing a dev team or thinking about how to integrate AI tools into a workflow, the TechCrunch piece is worth a read. The productivity gains are real - but so are the pitfalls if you're not paying attention to how those gains are actually being measured.