How Anthropic’s obsession with AI safety became its secret weapon against OpenAI

Anthropic has emerged as a formidable competitor in the artificial intelligence industry by focusing on enterprise customers and positioning itself as the most safety-conscious AI company. The approach appears to be paying off both commercially and with investors, even as critics question whether the company can maintain its principles while racing to capture market share.

The San Francisco-based startup grew from $1 billion in annualized revenue at the start of 2024 to more than $9 billion by the end of 2025. The company projects annualized revenue will exceed $30 billion by the end of 2026. Anthropic now controls 40 percent of the enterprise AI market and is raising $10 billion at a $350 billion valuation.

The company’s success centers on Claude Code, a tool for software engineering that has become an industry leader since its launch. The system can read existing code, plan tasks and execute them, demonstrating early “agentic” capabilities that allow AI models to carry out complex tasks independently. The tool has spawned the term “Claude benders” for marathon coding sessions among developers.

Anthropic’s background

Anthropic was founded in 2021 by seven former OpenAI researchers, including siblings Dario and Daniela Amodei, who serve as CEO and president respectively. The founders left OpenAI over disagreements about how to prepare the world for AI and concerns that the company was not taking safety seriously enough.

The company has cultivated a distinct culture that employees and observers describe as mission-obsessed. Twice a month, Dario Amodei convenes staff for “Dario Vision Quests,” where he speaks about building trustworthy AI systems aligned with human values. The company operates with minimal planning cycles, producing no operating plan that looks ahead more than 90 days. Employees describe the work environment as improvisational, with a “yes, and” approach where ideas are welcomed and judged collectively by what they call the “hive mind.”

This culture appears to drive unusual productivity. Anthropic launched more than 30 products and features in January alone, according to the company. The roughly 2,000-person workforce ships products faster than competitors with significantly larger teams: OpenAI’s workforce is roughly double Anthropic’s, while Microsoft and Google employ about 228,000 and 183,000 people respectively.

The safety focus has made Claude among the most honest models on the market, according to rankings by Scale AI researchers. The models are less likely to hallucinate and more likely to admit uncertainty. Anthropic publishes a “Constitution” for Claude detailing how it should behave, recently expanded to 22,000 words addressing issues like emotional dependence and user manipulation.

The company’s policy approach focuses on transparency requirements rather than regulating how AI is developed. Jack Clark, a co-founder and head of policy, argues that governments should mandate reporting about what internal tests reveal, comparing AI companies to factories where regulators care about outputs rather than internal processes.

All seven Anthropic co-founders remain at the company, providing stability that investors cite as an advantage over OpenAI, which has seen eight of its 11 founding team members depart. The company pledged not to introduce advertising into its products, distancing itself from rivals including OpenAI, which has begun trial ads in ChatGPT.

Criticism

However, the company faces criticism for contradictions between its messaging and actions. Anthropic publishes research about dangerous capabilities in Claude, including assisting with bioweapons and demonstrating insider threat behaviors, but continues advancing the models. CEO Dario Amodei warns that AI could eliminate up to 50 percent of entry-level office jobs within one to five years, yet Anthropic’s own products may accelerate job displacement.

Anthropic employees acknowledge they have not seriously considered slowing down AI development. Some researchers told The Atlantic they would prefer the industry move at half speed, but all believed such a slowdown was impossible. The consensus is that competitive pressures and capital markets demand speed.

The company sought investments from the United Arab Emirates and Qatar, countries Amodei described internally as enriching “dictators,” despite public warnings about authoritarian AI. When questioned, Amodei said the company never made a commitment not to seek Middle Eastern funding and that such investors would not control the firm.

Sources: The Atlantic, Bloomberg, Steve Yegge, Financial Times

Article image by TechCrunch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=148412072
