AI companies get comfortable offering their technology to the military

Social network giant Meta and leading artificial intelligence start-up Anthropic are making it easier for U.S. military and intelligence agencies to tap their algorithms.

https://www.washingtonpost.com/technology/2024/11/08/anthropic-meta-pentagon-military-openai/

Artificial intelligence companies that have previously been reluctant to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.
On Thursday, Anthropic, a leading AI start-up that has raised billions of dollars in funding and competes with ChatGPT developer OpenAI, announced it would sell its AI to U.S. military and intelligence customers through a deal with Amazon’s cloud business and government software maker Palantir.
On Monday, Meta changed its policies to allow military use of its free, open-source AI technology Llama, which competes with technology offered by OpenAI and Anthropic. The same day, OpenAI announced a deal to sell ChatGPT to the Air Force, after changing its policies earlier this year to allow some military uses of its software.
The deals and policy changes underscore a broad shift that has seen tech companies work more closely with the Pentagon, despite protests from some employees over work that contributes to military applications.
Anthropic changed its policies in June to allow some intelligence agency applications of its technology but still bans customers from using it for weapons or domestic surveillance. OpenAI also prohibits its technology from being used to develop weapons. Anthropic and OpenAI spokespeople did not comment beyond referring to the policies.
Arms-control advocates have long called for an international ban on using AI in weapons. The U.S. military has a policy that humans must maintain meaningful control over weapons technology but has resisted an outright ban, saying that such a prohibition would allow adversaries to gain a technological edge.
Tech leaders and politicians from both major parties have increasingly argued that U.S. tech companies must ramp up the development of defense tech to maintain the nation’s military and technological competitiveness with China.
In a blog post last month, Anthropic CEO Dario Amodei argued that democratic nations should aim to develop the best AI technology to give them a military and commercial edge over authoritarian countries, which he said would probably use AI to abuse human rights.
“If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies,” Amodei said in the blog post.
Anthropic’s backers include Google and Amazon, which has invested $4 billion in the start-up. Amazon founder Jeff Bezos owns The Washington Post.
The U.S. military uses AI for a broad range of purposes, including to predict when to replace parts on aircraft and recognize potential targets on the battlefield.
Palantir, which Anthropic is partnering with to get its technology to government customers, sells AI technology that can automatically detect potential targets from satellite and aerial imagery. Palantir adviser and Donald Trump donor Jacob Helberg watched election returns Tuesday night at the Republican candidate’s official event in Palm Beach, Florida.
The war in Ukraine has triggered a new interest in adapting cheap, commercially available technology such as small drones and satellite internet dishes to military use. A slew of Silicon Valley start-ups have sprung up to try to disrupt the U.S. defense industry and sell new tools to the military.
Military leaders in the United States and around the world expect future battlefield technology to be increasingly independent of human oversight. Although humans are still generally in control of making final decisions about choosing targets and firing weapons, arms-control advocates and AI researchers worry that the increased use of AI could lead to poor decision-making or lethal errors and violate international laws.
Google, Microsoft and Amazon compete fiercely for military cloud-computing contracts, but some tech employees have pushed back on such work.
In 2018, Google said it would not renew a Pentagon contract to provide analysis of drone imagery, amid public backlash and employee protests. The company has continued to expand its military contracts, even as some resistance has persisted.
This year Amazon and Google faced employee protests over Israeli government contracts amid concerns that the work could assist the military. In August, a group of workers at Google’s AI division, DeepMind, signed a letter asking the company to ensure that it was not selling AI to militaries and to terminate any contracts if it was, according to a copy obtained by The Post.
OpenAI and Anthropic, part of a newer generation of AI developers, have embraced military and intelligence work relatively early in their corporate development. Some other companies in the AI boom, such as data provider Scale AI, have made a willingness to work with the military a major focus of their business.