OpenAI is one step closer to the Pentagon thanks to a partnership with a defense startup
OpenAI has entered its first major defense partnership, a deal that could bring the AI giant's technology to the Pentagon.
The joint venture was recently announced by Anduril Industries, a multi-billion-dollar defense startup founded by Oculus VR creator Palmer Luckey that sells watch towers, communications jammers, military drones, and autonomous submarines. The “strategic partnership” will integrate OpenAI’s AI models into Anduril systems to “rapidly integrate time-sensitive data, reduce the burden on human operators, and improve situational awareness.” Anduril already provides counter-drone technology to the US government. It was recently selected to develop and test unmanned combat aircraft and was awarded a $100 million contract with the Pentagon’s Chief Digital and Artificial Intelligence Office.
OpenAI specified to the Washington Post that the partnership will cover only systems that “defend against unmanned aerial threats” (read: find and shoot down drones), seemingly to avoid the obvious association of its technology with military applications that harm people. Both OpenAI and Anduril say the partnership will keep the US on par with China’s AI development, a repeated goal also evidenced in the US government’s “Manhattan Project”-style investment in AI and “government efficiency.”
“OpenAI builds AI to benefit as many people as possible, and supports US-led efforts to ensure that technology upholds democratic values,” wrote OpenAI CEO Sam Altman. “Our partnership with Anduril will help ensure that OpenAI technology protects US military personnel, and will help the national security community understand and use this technology responsibly to keep our citizens safe and free.”
In January, OpenAI quietly removed policy language that prohibited uses of its technology posing a significant risk of physical harm, including “military and warfare.” An OpenAI spokesperson told Mashable at the time: “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. However, there are national security use cases that align with our mission. For example, we are already working with DARPA to promote the development of new cybersecurity tools to protect open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies.”
Last year, the company was reportedly pitching its services in various ways to US military and national security offices, aided by a former security chief from software company and government contractor Palantir. And OpenAI isn’t the only AI developer dabbling in military systems. Anthropic, maker of Claude, and Palantir recently announced a partnership with Amazon Web Services to sell Anthropic’s AI models to defense and intelligence agencies, touted as “decision-making” tools for “classified environments.”
Recent rumors suggest that President-elect Donald Trump is eyeing Palantir chief technology officer Shyam Sankar for a top engineering and research position at the Pentagon. Sankar has previously criticized the Defense Department’s technology acquisition process, saying the government should rely less on large defense contractors and buy more “commercially available technology.”