How AI defeats humans on the battlefield | BBC News


Administrator
How AI defeats humans on the battlefield | BBC News
« on: October 27, 2024, 07:07:17 AM »
How AI defeats humans on the battlefield | BBC News


Administrator
Re: How AI defeats humans on the battlefield | BBC News
« Reply #1 on: October 27, 2024, 07:08:42 AM »
Who Controls AI-Driven Warfare? | Wider View


Administrator
Re: How AI defeats humans on the battlefield | BBC News
« Reply #2 on: November 08, 2024, 09:54:18 PM »
AI companies get comfortable offering their technology to the military

Social network giant Meta and leading artificial intelligence start-up Anthropic are making it easier for U.S. military and intelligence to tap their algorithms.

https://www.washingtonpost.com/technology/2024/11/08/anthropic-meta-pentagon-military-openai/




Artificial intelligence companies that have previously been reticent to allow military use of their technology are shifting policies and striking deals to offer it to spy agencies and the Pentagon.

On Thursday, Anthropic, a leading AI start-up that has raised billions of dollars in funding and competes with ChatGPT developer OpenAI, announced it would sell its AI to U.S. military and intelligence customers through a deal with Amazon’s cloud business and government software maker Palantir.

On Monday, Meta changed its policies to allow military use of its free, open-source AI technology Llama, which competes with technology offered by OpenAI and Anthropic. The same day, OpenAI announced a deal to sell ChatGPT to the Air Force, after changing its policies earlier this year to allow some military uses of its software.

The deals and policy changes underscore a broad shift that has seen tech companies work more closely with the Pentagon, despite protests from some employees about their work contributing to military applications.

Anthropic changed its policies in June to allow some intelligence agency applications of its technology but still bans customers from using it for weapons or domestic surveillance. OpenAI also prohibits its technology from being used to develop weapons. Anthropic and OpenAI spokespeople did not comment beyond referring to the policies.

Arms-control advocates have long called for an international ban on using AI in weapons. The U.S. military has a policy that humans must maintain meaningful control over weapons technology but has resisted an outright ban, saying that such a prohibition would allow adversaries to gain a technological edge.

Tech leaders and politicians from both major parties have increasingly argued that U.S. tech companies must ramp up the development of defense tech to maintain the nation’s military and technological competitiveness with China.

In a blog post last month, Anthropic CEO Dario Amodei argued that democratic nations should aim to develop the best AI technology to give them a military and commercial edge over authoritarian countries, which he said would probably use AI to abuse human rights.

“If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies,” Amodei said in the blog post.

Anthropic’s backers include Google and Amazon, which has invested $4 billion in the start-up. Amazon founder Jeff Bezos owns The Washington Post.

The U.S. military uses AI for a broad range of purposes, including to predict when to replace parts on aircraft and recognize potential targets on the battlefield.

Palantir, which Anthropic is partnering with to get its technology to government customers, sells AI technology that can automatically detect potential targets from satellite and aerial imagery. Palantir adviser and Donald Trump donor Jacob Helberg watched election returns Tuesday night at the Republican candidate’s official event in Palm Beach, Florida.

The war in Ukraine has triggered a new interest in adapting cheap, commercially available technology such as small drones and satellite internet dishes to military use. A slew of Silicon Valley start-ups have sprung up to try to disrupt the U.S. defense industry and sell new tools to the military.

Military leaders in the United States and around the world expect future battlefield technology to be increasingly independent of human oversight. Although humans are still generally in control of making final decisions about choosing targets and firing weapons, arms-control advocates and AI researchers worry that the increased use of AI could lead to poor decision-making or lethal errors and violate international laws.

Google, Microsoft and Amazon compete fiercely for military cloud-computing contracts, but some tech employees have pushed back on such work.

In 2018, Google said it would not renew a Pentagon contract to provide analysis of drone imagery, amid public backlash and employee protests. The company has continued to expand its military contracts, even as it has faced some persistent resistance.

This year Amazon and Google faced employee protests over Israeli government contracts amid concerns that the work could assist the military. In August, a group of workers at Google’s AI division, DeepMind, signed a letter asking the company to ensure that it was not selling AI to militaries and to terminate any contracts if it was, according to a copy obtained by The Post.

OpenAI and Anthropic, part of a newer generation of AI developers, have embraced military and intelligence work relatively early in their corporate development. Some other companies in the AI boom, such as data provider Scale AI, have made a willingness to work with the military a major focus of their business.

droidrage
Re: How AI defeats humans on the battlefield | BBC News
« Reply #3 on: November 14, 2024, 12:24:19 AM »
LOL

New Robot Makes Soldiers Obsolete (Corridor Digital)




This Video ►
There's a new robot in town. You'll see it in the army soon!

Disclaimer ►
This video is a comedic parody and is not owned, endorsed, created by, or associated with the Boston Dynamics company.


Boston Dynamics' new robot makes soldiers obsolete!!!




The Dystopian Future of AI Warfare




The Pentagon's Replicator initiative is an ambitious plan to produce a new generation of weapons driven by artificial intelligence.

The goal is to develop weapons systems at low cost and in large quantities, so they can be replaced in short order if lost in combat.

In short, a new AI revolution that will transform the way we conduct warfare.

But as Quincy Institute Senior Fellow Bill Hartung explains, this is just the latest example of how faith in technology can generate the false hope that it will bestow a decisive advantage in warfare, when the historical evidence shows the opposite.

Video produced and edited by Steve McMaster.


How militaries are using artificial intelligence on and off the battlefield


« Last Edit: November 14, 2024, 03:22:41 AM by Administrator »