Decoding the AI beliefs of Anthropic and its CEO, Dario Amodei

The Defense Department has approved the cutting-edge artificial intelligence technology built by the San Francisco startup Anthropic for use with classified tasks.

But Anthropic, led by its CEO, Dario Amodei, does not want the Pentagon using the technology for certain purposes, such as autonomous weapons and domestic surveillance.

Now, the Pentagon and Anthropic are locked in a battle over the future of their contract, which is worth as much as $200 million. The Defense Department may also forbid its contractors from using Anthropic’s technology on government projects.

With several other AI companies hoping to provide similar technology to the Defense Department — including OpenAI, Google and Elon Musk’s xAI — the tussle between Anthropic and the Pentagon could damage Anthropic’s growing business selling AI to big corporate customers.

Here is a guide to Anthropic, Amodei and the company’s AI philosophy.

What is Anthropic?

Anthropic was founded by Amodei and his sister, Daniela Amodei, who worked together at OpenAI. They created their company in early 2021 after a series of disagreements with OpenAI executives over how its AI should be funded, built and released.

They founded Anthropic alongside about 15 other former OpenAI employees who shared their views. The group that left OpenAI to start Anthropic oversaw the creation of OpenAI’s large language models, the technology that eventually powered the chatbot ChatGPT.


At Anthropic, they built a chatbot called Claude. But they did not release the chatbot until after OpenAI touched off the AI boom with the release of ChatGPT in late 2022.

What are Dario Amodei’s views on the dangers of AI?

Amodei has long expressed concern that AI could be used to spread disinformation, power mass surveillance or enable autonomous weapons.

In 2019, while they were still at OpenAI, Amodei and other Anthropic founders were part of a team that announced that they were not releasing an early large language model called GPT-2 because it could be used to spread disinformation. But they eventually changed course.

In a podcast interview in 2023, Amodei said there was a 10% to 25% chance that AI could destroy humanity. But he has since tried to distance himself from that, saying he is not “a doomer.”


In October 2024, Amodei unveiled a more optimistic view of AI, publishing a 14,000-word essay on the potential benefits of the technology.

What are the roots of Amodei’s philosophy?

The Amodeis and many of Anthropic’s co-founders have ties to a community that has long aimed to ensure that AI is built and deployed in a safe way. Many people in this community call themselves Rationalists or effective altruists.

This community believes that AI could eventually find a cure for cancer or solve climate change, but its members worry that AI systems might do things their creators did not intend and cause people serious harm.

Daniela Amodei’s husband, Holden Karnofsky, is one of the founders of the effective altruist movement. The Amodeis and Karnofsky once lived in a Silicon Valley group house with many other members of the community, including some of the other founders of Anthropic.


But more recently, the Amodeis have said they are not part of the effective altruist community.

What are the politics of Anthropic’s founders?

The Anthropic founders have said that their political views are separate from their views on AI. But Dario Amodei was a vocal supporter of Kamala Harris in 2024 and did not hide his disdain for Donald Trump.

In a recent social media post, Daniela Amodei said that the deaths of protesters in Minneapolis were “not what America stands for,” while praising Trump for calling for an investigation.

Dario Amodei had close ties to officials in the Biden administration, including Ben Buchanan, Biden’s AI adviser. Anthropic has since hired Buchanan and Tarun Chhabra, a former National Security Council official for technology.


After the Trump administration reversed Biden-era policies that sought to restrict China’s access to computer chips needed to build AI, Amodei criticized the move. He reiterated this stance this year at the World Economic Forum in Davos, Switzerland.

This month, Anthropic named Chris Liddell, a deputy chief of staff in the first Trump administration, to its board.

What was their disagreement with OpenAI?

Amodei and others in his circle were unhappy that OpenAI had tied itself to Microsoft through an agreement that required the startup to share its technologies with the tech giant. They worried OpenAI was moving in a commercial direction that would make it hard to control its technologies.

They also had personal disagreements with two of OpenAI’s founders, its CEO, Sam Altman, and its president, Greg Brockman. Dario Amodei and others went to OpenAI’s board to try to push out Altman. After they failed, they left the company.


(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to AI systems. The two companies have denied those claims.)

Has Anthropic moved in a commercial direction?

Anthropic’s co-founders wanted to build AI with safety guardrails and structured the company as a public benefit corporation, which is aimed at creating public and social good. But they are part of a very commercial race to build AI.

This month, Anthropic raised a new funding round that values the company at $380 billion. After raising more than $57 billion, Anthropic is considering going public on Wall Street over the next 12-18 months.

Do other companies hold similar views on AI?

Google’s AI work is overseen by Demis Hassabis, who joined the company in 2014 when his startup DeepMind was acquired for $650 million. The tech giant’s primary AI lab is called Google DeepMind.


When Hassabis and his co-founders joined the company, they demanded two conditions: No DeepMind technology could be used for military purposes, and its most powerful technologies must be overseen by an independent board of technologists and ethicists.

In 2018, during the first Trump administration, Google backed away from a military contract after protests from employees. But it has since agreed to work with the Pentagon, and this work could include autonomous weapons. The DeepMind ethics board was disbanded.

OpenAI has said it would allow the Pentagon to use its technologies for any lawful purpose, though it has said it will provide its technology to the Defense Department with certain guardrails in place. Musk did not respond to requests for comment on xAI’s approach to military technologies. Google declined to comment.

