Silicon Valley has a complicated history with the United States Department of Defense, often preferring not to boast about the deals it strikes. But under President Donald Trump and Secretary of Defense Pete Hegseth, those arrangements are changing drastically. AI remains the biggest buzzword of the 2020s, and Uncle Sam wants some of big tech’s robotic smarts for itself. To that end, the Pentagon has been pursuing deals with some of the biggest players in the AI space, culminating last week in a major defense contract with ChatGPT developer OpenAI. Many watched with concern as competing companies tried and failed to extract ethical concessions from Hegseth, and with a deal now struck, worry that OpenAI’s capabilities might be used in warfare or to surveil citizens has boiled over.
Backlash against the former nonprofit is mounting, with users around the world expressing outrage at the potential for AI misuse in a military or intelligence context. A growing movement is calling for a boycott of OpenAI products like ChatGPT, and an organization called QuitGPT is planning a protest at OpenAI’s San Francisco headquarters. Dissenters point to the Pentagon’s insistence on using AI “for all lawful purposes,” as a Pentagon official told Axios, and to its refusal to ban the collection of citizens’ information.
OpenAI chief Sam Altman admitted in a March 2 post on X that the deal looked “sloppy and opportunistic,” and claimed his company intends to revise its contract with Uncle Sam to include protections against surveillance, including the use of non-private data. He added that the DoD had confirmed that agencies like the NSA would not have access to OpenAI’s services without changes to the contract.
ChatGPT uninstalls skyrocket after OpenAI signs Pentagon deal
Drone strikes and mass surveillance may be top of mind for those worried about military misuse of OpenAI’s artificial intelligence tools, but they are far from the only uses the Pentagon has for the technology. Although there are applications ranging from logistics streamlining to personnel management, the Pentagon’s apparent enthusiasm for unleashing AI’s most destructive capabilities has caused a steep drop in user retention. Users have been sharing screenshots of cancelled ChatGPT subscriptions and urging others to move away from the product. QuitGPT, which advocates switching to ChatGPT alternatives, claims to have seen 2.5 million engagements since setting up shop.
According to market analysis firm Sensor Tower, the ChatGPT app saw a massive day-over-day spike in uninstallations on February 28, the day of the joint American-Israeli assault on Iran. Uninstalls rose by 295%, signaling the broad unpopularity of the joint war effort. According to a March 1 text poll by The Washington Post, American adults oppose Saturday’s air strikes by a 13-point margin, with 52% against them.
OpenAI may not have had advance knowledge of the attack, but the U.S. had developed an increasingly threatening posture toward Iran in the weeks preceding the deal, moving some of its largest aircraft carriers, including the U.S.S. Gerald R. Ford and U.S.S. Abraham Lincoln, within immediate striking distance of the country. It was self-evident that any company taking the deal would quickly find out where the government’s ethical red lines lay.
Anthropic might be the one to benefit from OpenAI’s deal
OpenAI’s public relations quagmire in the wake of its Department of Defense deal has bolstered Anthropic’s “good guy of AI” reputation. The maker of the popular Claude chatbot had declined to sign a deal with the Pentagon only hours before OpenAI stepped in to scoop up the contract, citing the government’s refusal to include a moratorium on using the AI tools for mass surveillance. The specter of mass surveillance has haunted the AI sector since ChatGPT launched in late 2022. It is one of the nightmare scenarios current-generation AIs could enable, and Anthropic was unable to extract concessions from the government to alleviate those concerns before a Friday evening deadline on February 27.
Anthropic’s rejection created a surge of positive sentiment among the public, with many praising the AI lab for sticking to its principles. The Sensor Tower analysis, which showed a precipitous spike in ChatGPT app uninstalls after OpenAI’s deal, also tracked a corresponding rise in Claude downloads.
For some, that halo effect was only enhanced by a vitriolic response from the administration, with Secretary Hegseth branding the firm a “supply chain risk,” a designation that blocks the company from dealing with any corner of the U.S. government. However, sources with knowledge of the military’s use of AI confirmed to The Wall Street Journal that Claude had been used in Saturday’s initial barrage of strikes on Iran.