WAPO: Regulators reflexively try to strangle disruptive tech. Now it’s AI’s turn

Opinion: Regulators reflexively try to strangle disruptive tech. Now it’s AI’s turn.

https://www.washingtonpost.com/opinions/2024/10/25/biden-harris-artificial-intelligence-regulation/

[Photo: President Joe Biden signs an executive order on artificial intelligence at the White House in 2023. (Evan Vucci/AP)]


At its best, normal, as opposed to artificial, intelligence is characterized by “epistemic humility”: modesty about what can be known, especially about a free society’s future. Such humility is scarce in the political class, which prospers by promising to shape, by foreseeing, the future. Progressives, especially, envision government peering over the horizon to plan a complex society shorn of unintended consequences of unexpected developments.

Last November, Vice President Kamala Harris gave “remarks” in London about the future of artificial intelligence. At a Global Summit on AI Safety, she stressed the “risk” of AI, repeatedly using the words “safe” and “threat” or permutations of them as she warned that AI has “the potential” to cause “profound” harm as well as good.

She said that “when people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation,” this threat (herewith the obligatory word of progressive angst) is “existential for democracy.” So, “we” must address “the full spectrum of AI risk” to ensure that AI is “truly safe.” The antecedent of the pronoun “we” is governments. They, Harris said, should guarantee that AI’s benefits “are shared equitably” and “advance human rights and human dignity.”

Of course, AI requires regulations to protect us: Everything new expands government’s remit to enforce safety and equity, and save democracy, which government thinks is as fragile as Limoges porcelain. Apocalypse now — or by Tuesday at the latest. So, government must more robustly regulate.

Economist John Cochrane of the Hoover Institution, who blogs as the Grumpy Economist, dissents, asking: From Johannes Gutenberg’s 15th-century invention of movable type to X in the 21st, have the chattering classes ever chattered accurately about a new technology’s consequences? Thomas Robert Malthus was wrong about late-18th-century technologies leading to mass starvation; Karl Marx was mistaken about industrialization producing the immiseration of the proletariat.

Today, ban fracking because, well, you never know. Despite no evidence of harm, ban genetically modified foods out of, as is said, an abundance of caution. As Cochrane says, worrywarts often want preemptive regulations to preempt the wrong things: harm from genetically modified corn rather than a pandemic probably due to a human-engineered virus.

Harris illustrates the progressive politician’s itch to have (in Cochrane’s mordant words) “farsighted, dispassionate, and perfectly informed ‘regulators’” impose “safety” measures “before we even know what AI can do, and before dangers reveal themselves.”

Cochrane wonders: Would preemptive “safety” regulators, warily contemplating airplanes in 1910, have been able to anticipate the long experience-based improvement that led to today’s airliners? They might have strangled air travel. Suppose professional worriers, anxious about James Watt’s steam engine or Karl Benz’s automobile (which, in the 1880s, was, in the development of vehicles, about where we are with AI’s development), prompted governments to pass “effect on society and democracy” rules about those infant technologies. Would the technologies have been allowed to mature?

The Biden-Harris administration’s executive order on AI says that “all workers need a seat at the table, including through collective bargaining” so that AI will be “built on the views of workers, labor unions, educators, and employers.” (What about butchers, bakers and candlestick makers?) The administration, in its “Blueprint for an AI Bill of Rights” (which uses “protect” or a version of it 28 times) and elsewhere, insists that AI must advance “racial equity and support for underserved communities,” improve “environmental and social outcomes,” “mitigate climate change risks” and facilitate “an equitable clean energy economy.”

Imagine if today’s risk-averse, equity-obsessed, disruption-phobic, safety-maximizing mentality had existed in 1900, when more than one-third of Americans worked on farms. Government would have leaped to regulate, for safety reasons, the “threat” of a disruptive technology, the tractor, to assuage the anxieties of everyone at “the table.”

That table should be chopped into kindling so that our free society’s dynamism can do what it did in the 1970s and 1980s. Then, Cochrane notes, women poured into the workforce just as word processors and Xerox machines slashed demand for secretaries. Female employment — in better jobs — soared.

Moral panics occur so frequently they are soon forgotten: Who remembers the comic book fright?

In 1948, Americans — there were fewer than 150 million of them — purchased up to 100 million comic books every month. The great and the good pursed their lips. The Senate held a hearing. There was a comic-book burning in Missouri. Comics were too violent, too lubricious. A Comics Code was written, enforced by a Comics Code Authority.

Whew. Crisis averted. Today, however, AI stimulates ever-growing government’s unsleeping determination to protect us from ever-multiplying threats.