Anthropic
The announcements capture a tension that may define the company’s trajectory. Anthropic was founded in 2021 by former OpenAI executives who left specifically over concerns that commercialization was overwhelming safety commitments. Now, five years later, they’re racing toward the same hyper-growth model they once rejected, while also advocating for the regulations that would constrain that race.
At issue is whether a company can pursue a $380 billion valuation while maintaining the values that justified its existence.
The PAC Battle and Its Economics
The $20 million matters less for its size than for who it opposes.
Public First Action exists specifically to counter Leading the Future, a super PAC that’s raised over $125 million from OpenAI president Greg Brockman, Andreessen Horowitz, and other AI industry leaders. Leading the Future’s mission is blunt: oppose state-level AI regulation in favor of a light-touch federal framework—or ideally, no framework at all. They’re already running ads against candidates who support AI guardrails.
Anthropic’s donation puts it on the opposite side of that fight. They’re backing candidates who want transparency requirements, state-level protections, and restrictions on the most powerful AI models.
This is where the economics get uncomfortable. Regulatory compliance doesn’t hurt all companies equally.
When Anthropic calls for rules that “demand real transparency from the companies building the most powerful AI models,” it’s advocating for requirements it already meets. Every new mandate raises the barrier to entry; every transparency requirement favors companies with existing measurement infrastructure.
Whether that’s the intent or merely the effect is the question that defines the company’s credibility.
Research as Both Commitment and Competitive Advantage
Anthropic’s Societal Impacts team offers one lens into whether the company can maintain its founding principles while pursuing aggressive growth. The group studies how AI like Claude is actually used in the world: how people rely on it for advice, emotional support, education, and work, and what social and economic patterns emerge as AI becomes more embedded in daily life.
The work appears to be rigorous.
But every finding about AI’s societal effects becomes data that can support regulatory arguments. When Anthropic documents how Claude usage correlates with socioeconomic status, that research can justify policies requiring equitable access. When they study emotional dependency patterns, those findings support transparency requirements around AI companionship features. When they measure job displacement impacts, the data feeds directly into debates over AI governance.
Anthropic advocates for “meaningful safeguards” and “real transparency from the companies building the most powerful AI models,” which, here again, are requirements they’re already positioned to meet. Their Societal Impacts infrastructure becomes a competitive advantage the moment it becomes a regulatory requirement.
Measuring real-world impact invites uncomfortable findings. It also creates a feedback loop where principled research generates evidence that happens to justify regulations benefiting the company conducting it. Whether the research informs Anthropic’s product decisions or primarily serves its policy positioning remains an open question.
When Safety Researchers Leave
Maintaining stated values while pursuing hypergrowth has costs. Three days before Anthropic announced its $30 billion funding round, Sharma, the company’s head of safeguards research, resigned.
His resignation letter named no specific failures.
What he did say was more revealing: “Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.”
The timing alone is suggestive.
Sharma’s letter suggests something harder to address than specific failures: the gap between stated values and market pressures has grown wide enough that a senior safety researcher couldn’t reconcile them.
When your head of safeguards research quits citing “pressures to set aside what matters most” while declining to name specific failures, you’re facing the reality that market forces may be incompatible with the level of caution you claim to value.
Mind the Gap
Anthropic’s advocacy for AI guardrails has drawn fire from those who want Washington to stay out: depending on who you ask, the company is either a principled outlier or an incumbent lobbying for rules it already meets.
Both characterizations may be accurate. Anthropic likely believes regulation serves everyone while recognizing it serves them particularly well.
When your safety research infrastructure doubles as a competitive moat, when your regulatory advocacy aligns perfectly with your commercial advantages, and when your head of safeguards research quits citing pressures to abandon core values, the gap between principle and profit becomes harder to ignore.
Is that gap sustainable as valuations approach half a trillion dollars?
