
WAPO: White House orders Pentagon and intel agencies to increase use of AI

Offline droidrage

White House orders Pentagon and intel agencies to increase use of AI

The Biden administration is under pressure to speed up AI development while also safeguarding against potential risks associated with the technology.

https://www.washingtonpost.com/technology/2024/10/24/white-house-ai-nation-security-memo/

The White House is directing the Pentagon and intelligence agencies to increase their adoption of artificial intelligence, expanding the Biden administration’s efforts to curb technological competition from China and other adversaries.

The edict is part of a landmark national security memorandum published Thursday. It aims to push government agencies to step up their experimentation with and deployment of AI.

The memo also bans agencies from using the technology in ways that “do not align with democratic values,” according to a White House news release.

“This is our nation’s first ever strategy for harnessing the power and managing the risks of AI to advance our national security,” national security adviser Jake Sullivan said in a speech Thursday.

Sullivan called the speed of change in AI “breathtaking” and said it had the potential to affect fields ranging from nuclear physics to rocketry and stealth technology.

The White House believes that providing clear rules for using AI will make it easier for government agencies to use the technology, according to a briefing with senior administration officials who spoke on the condition of anonymity to discuss details of the report before its publication.

“We must outcompete our adversaries,” said one of the officials. “With a lack of policy clarity and legal clarity of what can and can’t be done, we are likely to see less experimentation.”

Government agencies should not use AI to track Americans’ free speech or get around existing controls on nuclear weapons, the national security memo says.

The United States has a “strong hand” in AI and its companies dominate the field, another of the officials said. Maintaining that lead to avoid a “strategic surprise” from rivals including China is a key government priority, the official said.

The memo is the latest example of the Biden administration trying to respond to concerns about the potential downsides of AI while also encouraging government use of the technology and allowing tech companies in the United States to keep innovating in the field.

Privacy and democracy advocates argue that AI could give Big Tech more power over Americans’ lives and potentially be used by government agencies to undermine civil rights.

“This balance between innovation and responsible use is exactly what the U.S. needs in the AI space,” said Divyansh Kaushik, a vice president at Beacon Global Strategies, a national security advisory firm.

Over the past two years, since OpenAI’s ChatGPT sparked a boom of new interest and investment in AI, regulators and politicians in Washington have scrambled to understand and begin regulating new forms of the technology rapidly being adopted by businesses and governments.

The national security memo was written in response to an expansive executive order on AI signed by President Joe Biden last year. It called for the government to study how to foster AI innovation while also making sure the technology didn’t harm people.

Thursday’s memo directs the government to help U.S. companies defend their AI technology from being stolen by foreign spies, and to continue working on diversifying the supply chain for high-end computer chips crucial to cutting-edge AI projects. Most of those chips are produced in Taiwan.

Military commanders, members of the Biden administration and politicians in Congress have framed AI as an area in which the United States must compete with China if it is to maintain military and economic dominance. The U.S. government has banned the export to the country of certain computer chips that are key to advanced AI programs.

The military has long been an early adopter of some forms of AI, such as image-recognition algorithms that process satellite photos to identify potential targets and cruise missiles that can fly themselves over complex terrain. But military analysts say AI will play an increasingly central role in military competition in the years to come, especially as the U.S. and China vie for influence in the Pacific.

Intelligence analysts still manually sift through huge volumes of data from satellites, human spies and sensors on ships and planes to piece together a picture of potential military threats. AI advocates both outside and within the Pentagon say the technology could synthesize that information much faster and give commanders insights that enable better or quicker decisions on the battlefield.

In the vast Pacific, more sophisticated AI on airborne and oceangoing drones could let them operate more independently, allowing the United States to monitor and control the region more effectively.

China’s tech industry may be officially cut off from the most advanced AI computer chips by U.S. export controls, but tech industry and military leaders say the country isn’t far behind the United States.

Privacy and civil rights advocates have warned that the same AI technology the U.S. military and intelligence establishment use against their adversaries could also be turned against American citizens by their own government.

Police departments across the United States routinely use facial recognition technology in investigations, while seldom disclosing such activity, according to a recent Washington Post investigation. Autonomous drone and robot technology developed in past years with support from the Pentagon is also used by law enforcement agencies.

The memo “unequivocally states” that the government should use AI only in ways that “align with democratic values,” according to a fact sheet about the document provided by a White House spokesperson. The memo also specifically requires agencies to monitor the risk AI systems can pose when it comes to privacy, discrimination and human rights, according to the fact sheet.

Michael Horowitz, a political science professor at the University of Pennsylvania who worked at the Pentagon on AI policy from 2022 until earlier this year, said the policy balances the need to speed up AI adoption with safety concerns. Now it’s up to the government to follow through.

“Implementation will be critical to ensure the reality on the ground matches the ambition of the vision,” Horowitz said.

Offline Administrator

Re: WAPO: White House orders Pentagon and intel agencies to increase use of AI
« Reply #1 on: November 27, 2024, 10:51:42 PM »
No Manhattan Project for AI, but maybe a Los Alamos

As with nuclear weapons, general artificial intelligence offers first-mover advantages that could alter the course of history.

https://www.washingtonpost.com/opinions/2024/09/06/general-artificial-intelligence-biden-administration-technology-strategy/

The Biden administration is preparing a formal “national security memorandum” on artificial intelligence that will explore ways for the United States to “preserve and expand U.S. advantages” in AI technologies that could transform science, business and warfare, according to a senior administration official who has reviewed a draft of the memo.

The new approach won’t propose the “Manhattan Project for AI” that some have urged. But it should offer a platform for public-private partnerships and testing that could be likened to a national laboratory, a bit like Lawrence Livermore in California or Los Alamos in New Mexico. For the National Security Council officials drafting the memo, the core idea is to drive AI-linked innovation across the U.S. economy and government, while also anticipating and preventing threats to public safety.

The new strategy will focus on defense and intelligence agencies, aided by the just-created AI Safety Institute at the Commerce Department and its National Institute of Standards and Technology. The Pentagon, the intelligence community and Commerce will work to develop partnerships with the five private companies that dominate AI research, all of them American: Microsoft-backed OpenAI, Google’s DeepMind, Elon Musk’s xAI, Meta AI and the start-up Anthropic.

The memorandum will “provide a framework for responsible use of AI, which will enable faster adoption” in the government and private sector, the administration official said. He said the memorandum should be completed in late September or early October. What’s motivating this effort is the growing possibility of “artificial general intelligence,” or AGI, a hyper-connected future version of the technology that promises superhuman intelligence.

How government should interface with this transformative technology is a preoccupation for policymakers at home and abroad. An early test is an AI safety bill just passed by the California legislature. One of its key provisions requires AI companies to exercise “reasonable care to avoid unreasonable risk” of catastrophes, according to Nathan Calvin, an AI safety lawyer who has been working on the legislation for state Sen. Scott Wiener (D). Silicon Valley tech companies are sharply divided on the bill, and Gov. Gavin Newsom (D) hasn’t said whether he will sign it.

The White House wants to connect future federal oversight of AI with international standard-setting that is already underway. The European Union recently adopted its own AI Act, the world’s first major legislative framework. Britain’s AI Safety Institute helped convene a summit of 28 countries at Bletchley Park in November and a follow-up meeting of the same nations in Seoul in May. China was part of both groups.

AI’s potential is world-enriching and also, possibly, world-destroying. The national security implications received intense study in July at a meeting of the Aspen Strategy Group, a bipartisan gathering of top former government officials, business executives and journalists. At the concluding session, Philip Zelikow, a former State Department official who teaches at Stanford, presented a provocative proposal for reducing risks.

The government’s first obligation is a threat assessment, said Zelikow. It needs to study what catastrophes could be deliberately created by “the worst people and governments in the world using the most advanced possible models.” The government must also assess the havoc that could be wreaked by a “misaligned,” or rogue, AI. Once government agencies have gauged the risks, they will need to think about countermeasures, which will also involve AI, Zelikow argued.

The Manhattan Project analogy is tempting. “This may be a technology where the first-mover advantage would be world-historically decisive,” Zelikow told me. He noted the oft-cited danger that Adolf Hitler might have developed a nuclear bomb first, if America hadn’t raced to build one.

But even if Washington wanted to drive artificial intelligence in the same way it did nuclear research, it probably couldn’t, said Graham Allison, a Harvard Kennedy School professor who has written extensively about AI. Unlike in the 1940s, Allison told me, cutting-edge technologies and cash to fund them are privately held, and the government struggles to keep pace. Public-sector managers are more often obstacles to innovation than enablers of it. Zelikow agrees with Allison’s critique.

Rand, which played a crucial role in early nuclear-weapons strategy, is helping policymakers think about AI risks with a new project called the “Geopolitics of AGI Initiative.” Joel Predd, the project leader, explained to me Thursday that he wants to explore some key uncertainties about whether AGI will emerge gradually or suddenly, and whether the country that obtains it first will have an unbreakable strategic or competitive advantage.

“Our central research question,” he said, “is how the U.S. government should address the deeply uncertain but technically credible potential that world-leading AI labs are on the cusp of developing” AGI.

The debate is just beginning. Already, the tech world is dividing between “doomers,” who think AGI will mean the end of humanity, and “accelerationists,” who see it as “a way to make everything we care about better,” in the words of techno-optimist Marc Andreessen, who co-created Mosaic, one of the first widely used web browsers.

Joe Biden is the paradigm of an old-fashioned, analog guy. But in his final months in office, his team is boldly trying to write the rules for a digital technology that will have the capacity to rearrange, for good or ill, every piece of our global mosaic. As Shakespeare’s Miranda exclaims at the end of “The Tempest”: “O brave new world.”