If you've been following the twists and turns of AI policy in Washington, brace yourself for another one. According to a report from TechCrunch, Trump administration officials may be actively encouraging major banks to test Anthropic's latest AI model, called Mythos. On the surface, that sounds like a fairly routine tech adoption story - until you factor in one significant wrinkle.

The Department of Defense recently designated Anthropic as a supply-chain risk. So the idea that other arms of the same administration might be nudging financial institutions toward the company's products is, to put it mildly, a confusing signal.

Why this matters beyond the headlines

The financial sector isn't just any industry when it comes to AI adoption. Banks handle sensitive personal data, manage critical infrastructure, and operate under intense regulatory scrutiny. Encouraging them to pilot a specific AI model - particularly one flagged by a major government agency - raises legitimate questions about how coherent U.S. AI policy actually is right now.

It also puts banks in a genuinely awkward position. If you're a compliance officer at a large financial institution getting mixed messages from the government about whether a particular AI vendor is trustworthy, that's not a fun place to be. Do you follow the encouraging nudge, or do you heed the security designation?

Anthropic's complicated moment

Anthropic has positioned itself as one of the more safety-conscious players in the AI race, which makes the DoD's supply-chain risk label particularly eyebrow-raising. The company has cultivated a reputation for responsible development, and whatever is driving the designation has not been made fully public.

The Mythos model itself hasn't been widely detailed in public reporting, so it's unclear what makes it specifically attractive for banking use cases. But the fact that officials are reportedly promoting it suggests there's real interest in seeing it stress-tested in high-stakes, data-heavy environments.

Reading the tea leaves

What this story really illustrates is how fragmented AI governance feels right now. Different agencies appear to be operating with different risk assessments of the same company, and the private sector is left trying to navigate the contradiction. For anyone watching how AI gets woven into critical industries, this is a story worth following closely - because the decisions being made now about which tools banks adopt and trust will have a long tail.

The full picture here is still developing, but the tension between the DoD's warning and the reported encouragement from other officials is a reminder that there's no unified playbook for AI in Washington yet - and that matters for all of us.