
How AI defeats humans on the battlefield | BBC News

  • 5 Replies
  • 380 Views
Offline Administrator
How AI defeats humans on the battlefield | BBC News
« on: October 27, 2024, 07:07:17 AM »
How AI defeats humans on the battlefield | BBC News


Offline Administrator
Re: How AI defeats humans on the battlefield | BBC News
« Reply #1 on: October 27, 2024, 07:08:42 AM »
Who Controls AI-Driven Warfare? | Wider View


Offline Administrator
Re: How AI defeats humans on the battlefield | BBC News
« Reply #2 on: November 08, 2024, 09:54:18 PM »
AI companies get comfortable offering their technology to the military

Social network giant Meta and leading artificial intelligence start-up Anthropic are making it easier for U.S. military and intelligence agencies to tap their algorithms.

https://www.washingtonpost.com/technology/2024/11/08/anthropic-meta-pentagon-military-openai/

Artificial intelligence companies that have previously been reticent to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.

On Thursday, Anthropic, a leading AI start-up that has raised billions of dollars in funding and competes with ChatGPT developer OpenAI, announced it would sell its AI to U.S. military and intelligence customers through a deal with Amazon’s cloud business and government software maker Palantir.

On Monday, Meta changed its policies to allow military use of its free, open-source AI technology Llama, which competes with technology offered by OpenAI and Anthropic. The same day, OpenAI announced a deal to sell ChatGPT to the Air Force, after changing its policies earlier this year to allow some military uses of its software.

The deals and policy changes underscore a broad shift that has seen tech companies work more closely with the Pentagon, despite protests from some employees over work that contributes to military applications.

Anthropic changed its policies in June to allow some intelligence agency applications of its technology but still bans customers from using it for weapons or domestic surveillance. OpenAI also prohibits its technology from being used to develop weapons. Anthropic and OpenAI spokespeople did not comment beyond referring to the policies.

Arms-control advocates have long called for an international ban on using AI in weapons. The U.S. military has a policy that humans must maintain meaningful control over weapons technology but has resisted an outright ban, saying that such a prohibition would allow adversaries to gain a technological edge.

Tech leaders and politicians from both major parties have increasingly argued that U.S. tech companies must ramp up the development of defense tech to maintain the nation’s military and technological competitiveness with China.

In a blog post last month, Anthropic CEO Dario Amodei argued that democratic nations should aim to develop the best AI technology to give them a military and commercial edge over authoritarian countries, which he said would probably use AI to abuse human rights.

“If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies,” Amodei said in the blog post.

Anthropic’s backers include Google and Amazon, which has invested $4 billion in the start-up. Amazon founder Jeff Bezos owns The Washington Post.

The U.S. military uses AI for a broad range of purposes, including predicting when to replace parts on aircraft and recognizing potential targets on the battlefield.

Palantir, which Anthropic is partnering with to get its technology to government customers, sells AI technology that can automatically detect potential targets from satellite and aerial imagery. Palantir adviser and Donald Trump donor Jacob Helberg watched election returns Tuesday night at the Republican candidate’s official event in Palm Beach, Florida.

The war in Ukraine has triggered a new interest in adapting cheap, commercially available technology such as small drones and satellite internet dishes to military use. A slew of Silicon Valley start-ups have sprung up to try to disrupt the U.S. defense industry and sell new tools to the military.

Military leaders in the United States and around the world expect future battlefield technology to be increasingly independent of human oversight. Although humans are still generally in control of making final decisions about choosing targets and firing weapons, arms-control advocates and AI researchers worry that the increased use of AI could lead to poor decision-making or lethal errors and violate international laws.

Google, Microsoft and Amazon compete fiercely for military cloud-computing contracts, but some tech employees have pushed back on such work.

In 2018, Google said it would not renew a Pentagon contract to provide analysis of drone imagery, amid public backlash and employee protests. The company has continued to expand its military contracts, even as it has faced some persistent resistance.

This year Amazon and Google faced employee protests over Israeli government contracts amid concerns that the work could assist the military. In August, a group of workers at Google’s AI division, DeepMind, signed a letter asking the company to ensure that it was not selling AI to militaries and to terminate any contracts if it was, according to a copy obtained by The Post.

OpenAI and Anthropic, part of a newer generation of AI developers, have embraced military and intelligence work relatively early in their corporate development. Some other companies in the AI boom, such as data provider Scale AI, have made a willingness to work with the military a major focus of their business.
« Last Edit: December 05, 2024, 05:40:29 AM by Administrator »

Offline droidrage
Re: How AI defeats humans on the battlefield | BBC News
« Reply #3 on: November 14, 2024, 12:24:19 AM »
LOL

New Robot Makes Soldiers Obsolete (Corridor Digital)

This Video ►
There's a new robot in town. You'll see it in the army soon!

Disclaimer ►
This video is a comedic parody and is not owned, endorsed, created by, or associated with the Boston Dynamics company.


Boston Dynamics new robot makes soldiers obsolete !!!


The Dystopian Future of AI Warfare

The Pentagon's Replicator initiative is an ambitious plan to produce a new generation of weapons driven by artificial intelligence.

The goal is to develop weapons systems at low cost and in large quantities that can be replaced in short order if lost in combat.

In short, a new AI revolution that will transform the way we conduct warfare.

But as Quincy Institute Senior Fellow Bill Hartung explains, this is just the latest example of how faith in technology can generate the false hope that it can bestow a decisive advantage in warfare, when the historical evidence shows the opposite.

Video produced and edited by Steve McMaster.


How militaries are using artificial intelligence on and off the battlefield


« Last Edit: November 14, 2024, 03:22:41 AM by Administrator »

Offline Administrator
Re: How AI defeats humans on the battlefield | BBC News
« Reply #4 on: December 05, 2024, 05:35:06 AM »
MORE AI companies get comfortable offering their technology to the military


OpenAI partners with weapons start-up Anduril on military AI

The defense company will add artificial intelligence technology from the ChatGPT maker to its anti-drone products.

https://www.washingtonpost.com/technology/2024/12/04/openai-anduril-military-ai/

SAN FRANCISCO — ChatGPT creator OpenAI and high-tech military manufacturer Anduril Industries will co-develop new artificial intelligence technology for the Pentagon, adding to a trend of leading tech companies taking on military projects.

The partnership will bring together OpenAI’s AI capabilities, among the most advanced in the industry, with Anduril’s drones, detection units and military software, the two companies said in a joint statement Wednesday. They declined to share any financial details about the terms of their partnership.

The deal aims to improve Anduril technology used to detect and shoot down drones that threaten American forces and those of allies, the statement said — tools the Pentagon buys from the military start-up to help counter the proliferation of cheap drones on battlefields all over the world.

After an Iranian-made drone killed three U.S. service members at a base in Jordan this year, an assessment by the military found that the drone probably had not been detected and that no weapon existed on the base to destroy it.

Anduril sells sensor towers, electronic warfare communications-jammers and drones that are meant to shoot down enemy drones or missiles, and offers software called Lattice designed to help soldiers watch over the battlefield and control multiple drones and sensors at once.

The OpenAI-Anduril deal is the latest in a string of recent announcements from tech companies about stepping up their work with the military. They come as the Pentagon looks to infuse more Silicon Valley innovation into weaponry to arm U.S. forces and allies with more potent, plentiful and affordable technology.

In November, OpenAI competitor Anthropic, developer of the chatbot Claude, said it would partner with Amazon and government software provider Palantir to sell its AI algorithms to the military. The same month, Facebook owner Meta changed its policies to allow the military to use its open source AI technology.

OpenAI barred its own products from being used for any military application until earlier this year, when it changed its policies to allow some military uses. Despite the new partnership, the company says its technology may still not be used to develop weapons, or to harm people or property.

Liz Bourgeois, a spokesperson for OpenAI, said the partnership complies with the company’s rules because it is narrowly focused on systems that defend from unmanned aerial threats. The deal doesn’t cover other use cases, Bourgeois said.

Just a few years ago, many Silicon Valley leaders were uninterested in dealing with the military. It was seen as a hidebound and unprofitable customer incompatible with the fast-moving industry, and some tech workers protested defense contracts.

Google in 2018 was pressured into declining to renew a deal to sell image-recognition technology to the Pentagon.

Silicon Valley leaders who take a more pragmatic approach to the industry's role in society, along with continued Pentagon efforts to woo tech firms, have recently made military deals more common.

The impact of technology such as image-recognition software and cheap drones on the battlefields of Ukraine and Gaza, as well as China's rising technological prowess, has inspired some young start-up founders to build companies focused on weapons and defense rather than on social media or e-commerce apps like the generation before them.

Donald Trump’s reelection last month has founders and investors among those companies anticipating a surge of new support from the U.S. government in the form of new contracts and loosened regulation. The new wave of military-friendly techies frame themselves as patriots trying to rejuvenate American manufacturing and help the country cement its superpower status.

“The accelerating race between the United States and China in advancing AI makes this a pivotal moment,” OpenAI and Anduril said in a joint statement on their new partnership. “If the United States cedes ground, we risk losing the technological edge that has underpinned our national security for decades.”

Not everyone in the tech industry is ready to embrace military work.

A group of Google employees was fired this year after protesting the company’s contract to sell software to the Israeli government. Prominent AI researchers have joined arms-control advocates to push for a preemptive ban on AI-enabled weapons, out of concern machines will eventually become able to independently decide to kill humans.
« Last Edit: December 05, 2024, 05:40:50 AM by Administrator »

Offline droidrage
Re: How AI defeats humans on the battlefield | BBC News
« Reply #5 on: December 08, 2024, 06:06:57 PM »
OpenAI employees question the ethics of military deal with startup Anduril

Internal discussions showed some workers expressed discomfort with the company’s artificial intelligence technology being used by a weapons maker.

https://www.washingtonpost.com/technology/2024/12/06/openai-anduril-employee-military-ai/

SAN FRANCISCO — Hours after ChatGPT-maker OpenAI announced a partnership with weapons developer Anduril on Wednesday, some employees raised ethical concerns about the prospect of artificial intelligence technology they helped develop being put to military use.

On an internal company discussion forum, employees pushed back on the deal and asked for more transparency from leaders, messages viewed by The Washington Post show.

OpenAI has said its work with Anduril will be limited to using AI to enhance systems the defense company sells to the Pentagon to defend U.S. soldiers from drone attacks. Employees at the AI developer asked in internal messages how OpenAI could ensure that Anduril systems aided by its technology wouldn't also be directed against human-piloted aircraft, or how it could stop the U.S. military from deploying them in other ways.

One OpenAI worker said the company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer, the messages showed. Another said that they were concerned the deal would hurt OpenAI’s reputation, according to the messages.

A third said that defensive use cases still represented militarization of AI, and noted that the fictional AI system Skynet, which turns on humanity in the Terminator movies, was also originally designed to defend against aerial attacks on North America.

OpenAI executives quickly acknowledged the concerns, messages seen by The Post show, while also writing that the company’s work with Anduril is limited to defensive systems intended to save American lives. Other OpenAI employees in the forum said that they supported the deal and were thankful the company supported internal discussion on the topic.

“We are proud to help keep safe the people who risk their lives to keep our families and our country safe,” OpenAI CEO Sam Altman said in a statement.

Anduril CEO Brian Schimpf said in a statement that the companies were “addressing critical capability gaps to protect U.S. and allied forces from emerging aerial threats, ensuring service members have the tools they need to stay safe in an evolving threat landscape.”

The debate inside OpenAI comes after the ChatGPT maker and other leading AI developers including Anthropic and Meta changed their policies to allow military use of their technology.

Existing AI technology still lags far behind Hollywood depictions, but OpenAI's leaders have been vocal about the potential risks of its algorithms being used in unforeseen ways. A company report issued alongside an upgrade to ChatGPT this week warned that making AI more capable has the side effect of “increasing potential risks that stem from heightened intelligence.”

The company has invested heavily in safety testing and said that the Anduril project was vetted by its policy team. OpenAI has held feedback sessions with employees on its national security work in the past few months and plans to hold more, said Liz Bourgeois, an OpenAI spokesperson.

In the internal discussions seen by The Post, the executives stated that it was important for OpenAI to provide the best technology available to militaries run by democratically elected governments, and that authoritarian governments would not hold back from using AI for military purposes. Some workers countered that the United States has sold weapons to authoritarian allies.

By taking on military projects, OpenAI could help the U.S. government understand AI technology better and prepare to defend against its use by potential adversaries, executives also said.

Silicon Valley companies are becoming more comfortable selling to the military, a major shift from 2018, when Google declined to renew a contract to sell image-recognition tech to the Pentagon after employee protests.

Google, Amazon, Microsoft and Oracle are all part of a multibillion dollar contract to provide cloud services and software to the Pentagon. Google fired a group of employees earlier this year who protested against its work with the Israeli government over concerns about how its military would use the company’s technology.

Anduril is part of a wave of companies, including Palantir and start-ups like Shield AI, that has sprung up to arm the U.S. military with AI and other cutting-edge technology. These companies have challenged conventional defense contractors, selling directly to the military and framing themselves as patriotic supporters of U.S. military dominance.

Analysts and investors predict defense tech upstarts may thrive under the incoming Trump administration because it appears willing to disrupt the way the Pentagon does business.

OpenAI and rival elite AI research labs have generally positioned their technology as having the potential to help all people, improving economic productivity and leading to breakthroughs in education and medicine. The dissent inside the company suggests that not all its employees are ready to see their work folded into military projects.

ChatGPT’s developer was founded as a nonprofit dedicated to ensuring that AI benefits all of humanity before later starting a commercial division and taking on billions in funding from Microsoft and others. For years the company prohibited its technology from being used by the military.

In January, OpenAI revised its policies, saying it would allow some military uses, such as helping veterans find information on health benefits. Use of its technology to develop weapons and harm people or property remains forbidden, the company says.

In June, the ChatGPT developer added Paul M. Nakasone, a retired four-star Army general and former director of the National Security Agency, to the nonprofit board of directors that is still pledged to OpenAI’s original mission. The company has also hired staff to work on national security policy.