OpenAI employees question the ethics of military deal with startup Anduril

Internal discussions showed some workers expressed discomfort with the company's artificial intelligence technology being used by a weapons maker.
https://www.washingtonpost.com/technology/2024/12/06/openai-anduril-employee-military-ai/

SAN FRANCISCO — Hours after ChatGPT-maker OpenAI announced a partnership with weapons developer Anduril on Wednesday, some employees raised ethical concerns about the prospect of artificial intelligence technology they helped develop being put to military use.
On an internal company discussion forum, employees pushed back on the deal and asked for more transparency from leaders, messages viewed by The Washington Post show.
OpenAI has said its work with Anduril will be limited to using AI to enhance systems the defense company sells to the Pentagon to defend U.S. soldiers from drone attacks. Employees at the AI developer asked in internal messages how OpenAI could ensure Anduril systems aided by its technology wouldn't also be directed against human-piloted aircraft, or how it could stop the U.S. military from deploying them in other ways.
One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said that they were concerned the deal would hurt OpenAI’s reputation, according to the messages.
A third said that defensive use cases still represented militarization of AI, and noted that the fictional AI system Skynet, which turns on humanity in the Terminator movies, was also originally designed to defend against aerial attacks on North America.
OpenAI executives quickly acknowledged the concerns, messages seen by The Post show, while also writing that the company’s work with Anduril is limited to defensive systems intended to save American lives. Other OpenAI employees in the forum said that they supported the deal and were thankful the company supported internal discussion on the topic.
“We are proud to help keep safe the people who risk their lives to keep our families and our country safe,” OpenAI CEO Sam Altman said in a statement.
Anduril CEO Brian Schimpf said in a statement that the companies were “addressing critical capability gaps to protect U.S. and allied forces from emerging aerial threats, ensuring service members have the tools they need to stay safe in an evolving threat landscape.”
The debate inside OpenAI comes after the ChatGPT maker and other leading AI developers including Anthropic and Meta changed their policies to allow military use of their technology.
Existing AI technology still lags far behind Hollywood depictions, but OpenAI's leaders have been vocal about the potential risks of its algorithms being used in unforeseen ways. A company report issued alongside an upgrade to ChatGPT this week warned that making AI more capable has the side effect of "increasing potential risks that stem from heightened intelligence."
The company has invested heavily in safety testing, and said that the Anduril project was vetted by its policy team. OpenAI has held feedback sessions with employees on its national security work in the past few months and plans to hold more, said Liz Bourgeois, an OpenAI spokesperson.
In the internal discussions seen by The Post, the executives stated that it was important for OpenAI to provide the best technology available to militaries run by democratically elected governments, and that authoritarian governments would not hold back from using AI for military purposes. Some workers countered that the United States has sold weapons to authoritarian allies.
By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.
Silicon Valley companies are becoming more comfortable selling to the military, a major shift from 2018, when Google declined to renew a contract to sell image-recognition tech to the Pentagon after employee protests.
Google, Amazon, Microsoft and Oracle are all part of a multibillion dollar contract to provide cloud services and software to the Pentagon. Google fired a group of employees earlier this year who protested against its work with the Israeli government over concerns about how its military would use the company’s technology.
Anduril is part of a wave of companies, including Palantir and start-ups like Shield AI, that has sprung up to arm the U.S. military with AI and other cutting-edge technology. They have challenged conventional defense contractors, selling directly to the military and framing themselves as patriotic supporters of U.S. military dominance.
Analysts and investors predict defense tech upstarts may thrive under the incoming Trump administration because it appears willing to disrupt the way the Pentagon does business.
OpenAI and rival elite AI research labs have generally positioned their technology as having the potential to help all people, improving economic productivity and leading to breakthroughs in education and medicine. The dissent inside the company suggests that not all its employees are ready to see their work folded into military projects.
ChatGPT's developer was founded as a nonprofit dedicated to ensuring that AI benefits all of humanity, before starting a commercial division and taking on billions in funding from Microsoft and others. For years the company prohibited its technology from being used by the military.
In January, OpenAI revised its policies, saying it would allow some military uses, such as helping veterans find information on health benefits. Use of its technology to develop weapons and harm people or property remains forbidden, the company says.
In June, the ChatGPT developer added Paul M. Nakasone, a retired four-star Army general and former director of the National Security Agency, to the nonprofit board of directors that is still pledged to OpenAI’s original mission. The company has also hired staff to work on national security policy.