Same Stance, Different Dance
OpenAI and Anthropic held identical positions on military AI. Only one got punished.
Last Saturday morning, something strange happened on millions of phones across the globe. People who had previously downloaded ChatGPT, the world’s most beloved chatbot, opened their app stores with the focused energy of people casting a vote. They were uninstalling.
By that evening, ChatGPT’s US mobile uninstalls had surged 295% above baseline.1 On a normal day that number is about 9%. One-star reviews flooded in at 775% above average. The hashtags #CancelChatGPT and #QuitGPT spread across the internet. Around 2.5 million people pledged to join the boycott2 against OpenAI.

During ChatGPT’s fall from grace, another strange thing was happening in parallel. Claude, Anthropic’s chatbot, hit #1 in the United States3 for the first time in its existence. ChatGPT’s fall and Claude’s rise were, of course, a reaction to a simple story that, as the public understood it, goes like this: Anthropic refused to help enable killer robots and mass surveillance for the Pentagon. OpenAI didn’t. One company had a conscience. The other had a price tag.
Some commentators praised Anthropic’s CEO, Dario Amodei, for choosing “war with the Department of War.”4 Around that time, over 60 OpenAI and 300 Google employees signed an open letter titled We Will Not Be Divided,5 framing the issue as a collective stand against AI being used for mass surveillance and autonomous killing.
It was an emotional story. Also a half-truth.
The Most Hawkish AI Lab
The truthful part is that Anthropic probably IS the most safety-obsessed lab in the Valley, with personal ties to EA. There are many ways to define Effective Altruism, or EA, but for the purposes of this essay, let’s focus on its role in the AI industry. It is a polarizing philosophical movement centered on AI’s hypothetical ability to end the world as we know it UNLESS we prepare. Anthropic’s founders, Dario and Daniela Amodei, have publicly distanced themselves6 from the movement, but what complicates things is Daniela’s marriage to Holden Karnofsky, one of the most prominent EA researchers. He also happens to sit7 on Anthropic’s board at the moment. With all of this in mind, it’s not hard to see why Anthropic got nicknamed the “doomer safety lab.”
One might think that a company with such awareness of the existential risks posed by AI would stick to “safe” stuff: developing products like image, video, or music generation for the average Joe. But that couldn’t be further from the truth. Apart from being the most safety-obsessed lab in the Valley, they also happen to be the most hawkish one.
Anthropic partnered with Palantir8 in November 2024 to bring Claude into the US military. This made them the first AI company to deploy on classified military networks. For comparison, OpenAI’s focus that year had been on releasing GPT-4o mini9, their cheapest model yet. The lab has also been known for its strong anti-China stance. Yao Shunyu, a researcher who had worked on Claude’s development and held a PhD from Stanford, resigned and moved to Google DeepMind10. He wrote that roughly 40% of his reason for leaving was Anthropic’s anti-China stance, and that he believed most employees disagreed with the language but the company had made it official policy. Anthropic has spent over $1 million lobbying in a single quarter11 on AI export controls, pressing for stricter chip restrictions. At Davos in January 2026, Amodei called the Trump administration’s decision to allow Nvidia chip sales to China “crazy” and compared it to selling nuclear weapons to North Korea12.
What could make Anthropic, the most national-defense-focused lab, one so eager to “defeat China in this technology,” clash with the Pentagon?
Escalation
Short answer: the Pentagon’s mysterious change of heart. Anthropic’s contract with the US military, signed in 2024, was extended13 in the summer of 2025. Sometime afterwards, however, the Department of War drafted a different contract, one that touches on two issues Dario Amodei just couldn’t swallow: mass surveillance of US citizens and fully autonomous weapons.
On fully autonomous weapons, which have inspired dystopian TV shows like Black Mirror14 because of how inherently scary they are, Amodei explicitly said they “may prove critical to U.S. national defense.”15 Basically, he very much supports them. His issue is that the tech isn’t ready yet. He told the Pentagon that today’s frontier AI systems are simply not reliable enough to power fully autonomous weapons16 and offered to work with them on R&D to make them reliable enough. Dario is not saying “never.” He’s just saying “not yet.”
Dario’s stance on mass surveillance sounds really ethical until you realize that every single Anthropic statement specifies “mass domestic surveillance” and “US persons.” Amodei explicitly wrote that Anthropic supports the use of AI for lawful foreign intelligence and counterintelligence missions. Collecting and analyzing personal data like the geolocation, browsing histories, and financial records of non-Americans? Not a problem! The distinction isn’t between surveillance and no surveillance. It’s between surveilling your own citizens and surveilling everyone else.
And even Dario’s worry for the privacy of US citizens feels futile thanks to a convenient loophole, which makes mass surveillance of Americans easy and, most importantly, legal. Under the Fourth Amendment, if the FBI wants to know your location, it needs a warrant.17 However, that same personal data, like your location or financial records, is constantly collected by your phone apps, car, or fitness tracker. This data is then sold to companies called “data brokers,” who in turn package it up and sell it to anyone willing to pay. Advertisers, mostly. But also government agencies. So the government can buy the EXACT same information it would need a warrant to demand, and it’s perfectly legal under current law.18

Before AI, “commercially available data” was just a massive, unstructured pile. Useful in theory, nearly impossible to analyze at scale. But AI opened up new possibilities for mass analysis. It can aggregate the data, cross-reference it, and build detailed surveillance profiles of millions of people at once. As Amodei told CBS: “That actually isn’t illegal. It was just never useful before the era of AI.”19 The technology outpaced the law. Congress hasn’t caught up. And this opened a gray area that no one, not Congress, not the courts, not the companies, has resolved.
After what seemed like a long few weeks of painful but mostly private negotiations between Anthropic and the Department of War, Anthropic was given a deadline of Friday, February 27, 2026, to agree to the Department’s new contract, which specified that its software would be used for “all lawful purposes”. One day before the deadline, on Thursday, Dario’s response appeared on Anthropic’s blog: “we cannot in good conscience accede to their request.”20
For a few hours it looked like the US military had lost a crucial partner and Anthropic had lost hundreds of millions of dollars. Tensions behind closed doors spilled into public life overnight. Trump called Anthropic21 “The Leftwing nut jobs”. Emil Michael, the Pentagon’s Under Secretary of War, decorated Amodei with a spicy description: a “liar” with a “God complex.”22 But the other side wasn’t emotionless either. In a leaked internal memo to Anthropic staff, Amodei said Trump disliked Anthropic in part because the company didn’t give him “dictator-style praise.”23
Secretary of War Pete Hegseth threatened to designate Anthropic as a Supply-Chain Risk to National Security.24 If you’ve never heard of this “title” before, it’s because it rarely happens; Huawei is one notable example.
At the end of the turbulent week, Anthropic had gained sympathy from consumers across the world. Katy Perry posted her freshly acquired Anthropic subscription25 to her 85+ million followers. But Anthropic also lost a lot. They lost around $200 million in contracts and an opportunity to work on a cause that matters to Dario: national defense.
The Loophole
But there might be a practical reason for Amodei’s refusal to collaborate with the Pentagon that has nothing to do with ethics and morals. This particular scenario, a government institution pulling a private company into a legal gray area, has happened before. And although a company might benefit short-term, in the long run things can backfire.
9/11 was one of the most traumatic events in American history. The post-9/11 US was a shocked and traumatized country, ready to do whatever it took to prevent anything similar from happening again. So when the NSA asked all major telecoms to hand over customer data without warrants26, AT&T, Verizon, and BellSouth said yes. AT&T even built a secret room in its San Francisco offices to route internet traffic directly to the NSA. But not everyone believed that fear, however strong, justified skipping the courts. Qwest’s CEO, Joseph Nacchio, “refused” to hand over data27 in the same way that Dario Amodei is “refusing” the Pentagon’s request today. Nacchio demanded a court order before handing over any data. He paid a huge price for his defiance and spent four and a half years in prison.
When Snowden’s 2013 revelations proved the surveillance program was indeed illegal, Nacchio was vindicated. And what happened to AT&T, Verizon, and BellSouth, the compliant telecoms? They faced over 50 lawsuits, and Congress had to pass emergency retroactive immunity28 to keep them from going bankrupt. Amodei doesn’t need to be a moral philosopher to see where “all lawful purposes” could lead after one whistleblower or one change of administration. The company that complies today gets sued tomorrow. Dario might just be looking at a ticking legal bomb and refusing to go near it.
While he might not be the biggest risk taker in the Valley, there’s someone who’s far more comfortable with risk.
All Lawful Purposes
Sam Altman had a couple of rough months. In December ‘25 he hit code red29 after his competitors, Google and Anthropic, released models superior to ChatGPT. Altman has long had a reputation for using a lot of words to say very little: carefully measured, always on message. The stuff of politicians. That reputation cracked in February ‘26 when, in a panicky attempt to defend the rising cost of training AI models, he clumsily told the public that a child is a ‘less efficient’ system30 to train than an AI model.
Surprisingly, on Friday morning, as US government officials were deeply emotionally invested in berating Anthropic, Sam Altman was full of praise for Dario’s company. He expressed, with his signature Valley vocal fry, how glad he was that “Anthropic has supported American warfighters”.31 By the evening, it was announced that OpenAI had signed the deal with the DoW.32 Was it THE DEAL that made Dario walk away? Or a better type of deal?
Initially it was really hard to tell. And Sam’s ability to use a lot of words without saying anything didn’t help. Sam claimed that his red lines on the Pentagon deal were identical to Amodei’s. No domestic mass surveillance. No autonomous weapons. Humans in the loop. He said this in an employee memo33 before the deal, in the announcement itself, and in his X AMA afterward.34 He even added a third line Amodei hadn’t: no AI for high-stakes automated decisions like social credit systems. At OpenAI’s all-hands, company leaders acknowledged that governments will spy internationally and that national security officers “can’t do their jobs” without international surveillance capabilities.35
Basically, same customer. Same red lines. Same comfort with foreign intelligence and future weapons development.
But soon three familiar words emerged in all the reports. All lawful purposes.
The contract language OpenAI accepted read: “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” Anthropic rejected this exact phrase36 because “all lawful purposes” covers things like the previously mentioned commercially available data: legal to purchase and analyze, but surveillance in Anthropic’s eyes. OpenAI accepted the phrase.
As it turned out, Sam was about to experience the wrath of the public. In a leaked transcript37, reported by CNBC, WSJ, and others, Sam was quoted telling his workers that “maybe you think the Iran strike was good and the Venezuela invasion was bad… you don’t get to weigh in on that.”38 After intense backlash on X, Altman retroactively added39 a few things to the contract: “The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,”40 including via “commercially acquired personal or identifiable information.” Which is essentially what Anthropic had been demanding from the start. He admitted the original deal was “opportunistic and sloppy.”41
So both companies arrived at the same destination. One got there by following its principles. The other got there after a 295% uninstall surge forced a rewrite.
All things considered, one has to wonder: WHY was the Pentagon willing to negotiate with OpenAI but not with Anthropic? The answer might not be in what’s written in the contract. It might be more feudal than legal. It’s all about loyalties and allegiances.
While Dario was busy lobbying for stricter chip restrictions42 on China and publishing blog posts43 about AI safety, Sam Altman was building a very different kind of relationship with the White House. In January 2025, Altman stood beside Trump at the Stargate announcement44 — a $500 billion AI infrastructure project — and told the president: “For AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn’t be able to do this without you, Mr. President.” Altman personally donated a million dollars45 to Trump’s inauguration. His co-founder Greg Brockman and his wife gave $25 million to the MAGA Inc. super PAC46 and co-funded a PAC that opposes AI safety legislation. In September 2025, when Trump hosted 33 Silicon Valley leaders — 13 of them billionaires47 — for a White House dinner in the Rose Garden, Altman was there, thanking Trump for being a “pro-business, pro-innovation president.” Amodei was not. Anthropic had instead donated $20 million to a group campaigning48 for more AI regulation.
The Pentagon didn’t reject a position. It rejected a relationship. And consumers didn’t choose between ethics and complicity. They chose between two brands that had packaged the same complicity differently.
Unelected Leaders
In spite of Amodei’s obvious desire to advance national security once the law catches up, many felt that his behavior over the last few weeks weakened American democracy instead of strengthening it. The logic is simple: Dario isn’t an elected leader, he’s the cofounder of a private company. YET, he was pulling the strings, setting conditions for and commanding the DoW, something he has no mandate to do. Ben Thompson put it precisely in Stratechery:49 either Anthropic accepts a subservient position relative to the US government and leaves decision-making to elected officials, or the government destroys it. Palmer Luckey, the Anduril founder, put it more bluntly:50 “Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?”
A CEO dictating how the military operates is pretty anti-democratic. But it’s neither the first nor the most extreme example of a private company influencing a government agency. As a matter of fact, CEOs of private companies regularly affect who wins elections through a legal, absurd mechanism: donating money and lobbying. Elon Musk even publicly claimed credit for electing the current president.51 Musk spent over $250 million on the 2024 election52, making him Trump’s single largest donor. Peter Thiel and Alex Karp of Palantir sued the government and won53, forcing the Army to consider commercial software instead of building its own.

Elon Musk, Palantir, and Anthropic all point to an increasingly blurred line between government and private power. Yet, of the three, Dario’s stance is the most mellow. If one had to put it in a single sentence, it would be something along the lines of: “Congress hasn’t legislated on this, and until it does, I’m not comfortable handing over a tool that exploits that vacuum.” He explicitly called on Congress to pass guardrails. He acknowledged Congress might not move fast enough. Alan Rozenshtein, a law professor at the University of Minnesota, made the same point in Lawfare54: if Congress had legislated guidelines on autonomous weapons and surveillance, this whole fight probably never happens.
Olive Branch
The most popular version of this story goes like this: one CEO is a hero and the other is the villain. That version is easier to tell. It’s the one that went viral. It’s the one that moved markets and filled comment sections and made people feel, briefly, that their consumer choices were moral choices — that uninstalling an app was a form of resistance.
The harder version — the one you’re left with when you strip the performance — is that two companies with very similar positions on military surveillance and autonomous weapons managed to produce opposite emotional reactions from millions of people. One became the face of corporate conscience; the other became the symbol of corporate betrayal. And the difference between them wasn’t ethics. It was contract language, political donations, and which CEO had $25 million in the president’s PAC.
Regardless, the performance worked. Not because one company was anti-war and the other pro-war. Not because consumers understood what they were protesting; most had no idea the fight was about three words in a contract clause. But because 2.5 million people pledging to boycott, a 295% uninstall surge, and Katy Perry posting her Claude subscription created the one thing that Pentagon negotiations couldn’t: leverage. Altman updated his contract. Stronger protections were introduced. OpenAI and Google engineers united.
The Pentagon and Anthropic went back to negotiations55 after the long weekend. Dario apologized for his angry tone in the leaked memo, emphasizing that he feels and thinks differently today. After brief mentions of not giving up on their “two narrow exceptions” and of taking the DoW to court over the supply-chain-risk designation, today’s Dario is dancing to a different tune. He is deescalating and offering an olive branch. Models will be provided to the US military at a symbolic price in order to make sure “warfighters and national security experts are not deprived” (and that the military stays reliant on Anthropic’s software). Today’s dance is about how “Anthropic has much more in common with the Department of War than we have differences.” Same stance, different dance.
Sources:
TechCrunch, "ChatGPT uninstalls surged by 295% after DoD deal," Mar 2, 2026.
Implicator AI, "ChatGPT Uninstalls Jump 295%," Mar 3, 2026.
Vice, “Anthropic’s Claude Hits No. 1 in the App Store as Users Ditch ChatGPT Amid Recent Backlash,” Mar 4, 2026.
Anthropic, "Statement from Dario Amodei on our discussions with the Department of War," Feb 26, 2026.
Engadget, "Google and OpenAI employees sign open letter in solidarity with Anthropic," Feb 27, 2026.
EA Forum, "Anthropic is not being consistently candid about their connection to EA," Mar 2025; Wired, Anthropic profile, 2025.
Fortune, “Anthropic hired president Daniela Amodei’s husband to work on the company’s AI safety strategy”, Feb 13, 2025.
Palantir, “Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations,” Nov 7, 2024.
OpenAI, “GPT‑4o mini: advancing cost-efficient intelligence,” July 18, 2024.
Elephas, "AI Researcher Quits Anthropic Over China Stance," Oct 2025.
Legis1, "Anthropic Spends $1M on AI Lobbying for Export Controls," Oct 2025.
SiliconANGLE, "Anthropic boss Dario Amodei slams Trump 'crazy' decision to sell advanced AI chips to China," Jan 20, 2026.
Anthropic, “Anthropic and the Department of Defense to advance responsible AI in defense operations”, Jul 14, 2025.
Vox, “Black Mirror season 3, episode 6: “Hated in the Nation” has one true villain,” Oct 30, 2016.
YouTube, CBS Sunday Morning, “AI company Anthropic's Dario Amodei: ‘We are patriots’,” Mar 2026.
CBS News, "Pentagon official lashes out at Anthropic," Feb 26, 2026.
Supreme Court, “CARPENTER v. UNITED STATES,” Oct 2017.
Brennan Center for Justice, "Closing the Data Broker Loophole," 2024.
The Decoder, "The Pentagon-OpenAI-Anthropic fallout comes down to three words," Mar 1, 2026.
Anthropic, “Statement from Dario Amodei on our discussions with the Department of War,” Feb 27, 2026.
Truth Social, Donald Trump, Feb 2026.
The New York Times, “How Talks Between Anthropic and Defense Dept. Fell Apart,” Mar 2026.
The Information, “Anthropic CEO: Trump Disliked Company For Not Giving ‘Dictator-Style Praise’,” Mar 2026.
Axios, “Pentagon blacklists Anthropic, labels AI company ‘supply chain risk’,” Feb 27, 2026.
X @katyperry, Feb 27, 2026.
USA TODAY, “NSA has massive database of Americans’ phone calls,” May 11, 2006.
Wikipedia, "Joseph Nacchio."
Wikipedia, "FISA Amendments Act of 2008."
The Wall Street Journal, “OpenAI Declares ‘Code Red’ as Google Threatens AI Lead,” Dec 2, 2025.
Futurism, “Sam Altman Fumes That It Takes Longer to Train a Human Than an AI, Plus They Eat All That Wasteful Food,” Feb 23, 2026.
CNBC, "Anthropic faces lose-lose scenario in Pentagon conflict as deadline looms," Feb 27, 2026.
OpenAI, "Our agreement with the Department of War," Feb 28, 2026.
The Wall Street Journal, “OpenAI CEO Sam Altman Defends Pentagon Work to Staff, Calls Backlash ‘Really Painful’,” Mar 3, 2026.
X @sama, Feb 28, 2026.
Fortune, “OpenAI strikes a deal with the Pentagon just hours after Trump orders the end of Anthropic contracts, and hours after a staff all-hands,” Feb 27, 2026.
The Decoder, "The Pentagon-OpenAI-Anthropic fallout comes down to three words," Mar 1, 2026.
Cybernews, "Altman admits Pentagon deal looked 'opportunistic,' yet is exploring deal with NATO," Mar 4, 2026; CNBC and WSJ confirmed transcript.
CNBC, “Sam Altman tells OpenAI staffers that military’s ‘operational decisions’ are up to the government,” Mar 3, 2026.
X @sama, Mar 2, 2026.
Axios, "OpenAI, Pentagon add more surveillance protections," Mar 3, 2026.
CNBC, "OpenAI's Altman admits defense deal was 'opportunistic and sloppy,'" Mar 3, 2026.
Anthropic, “Securing America’s compute advantage,” Apr 30, 2025.
Dario Amodei, “Machines of Loving Grace,” Oct, 2024.
CBS News, "Trump announces up to $500 billion in private sector AI infrastructure investment," Jan 21, 2025.
NPR, “Tech moguls Altman, Bezos and Zuckerberg donate to Trump’s inauguration fund,” Dec 13, 2024.
Brennan Center for Justice, "Pro-Trump Super PAC Raises Record-Breaking $305 Million," 2026.
Fortune, "Meet all 33 Silicon Valley power players at Trump's high-profile tech dinner," Sep 5, 2025.
Anthropic, “Anthropic is donating $20 million to Public First Action,” Feb 12, 2026.
Stratechery, "Anthropic and Alignment," Mar 3, 2026.
X @PalmerLuckey, Feb 27, 2026.
Unlock Democracy, "Did Elon Musk buy the US election?" Jun 2025.
NBC News, "Elon Musk spent a quarter-billion dollars electing Trump," Dec 6, 2024.
The Hill, “Palantir wins lawsuit over US Army data system,” Oct 31, 2016.
Lawfare, "What the Defense Production Act Can and Can't Do to Anthropic," Feb 25, 2026.
Anthropic, "Where things stand with the Department of War," Mar 5, 2026.