The AI Dilemma 2: AI could cause ‘harm to the world’ --OR-- transform our lives?


Offline droidrage

CEO behind ChatGPT warns Congress AI could cause ‘harm to the world’

https://www.washingtonpost.com/technology/2023/05/16/sam-altman-open-ai-congress-hearing/

In his first congressional testimony, OpenAI CEO Sam Altman called for extensive regulation, including a new government agency charged with licensing AI models



Sen. Richard Blumenthal (D-Conn.), left, chair of the Senate Judiciary subcommittee, greets OpenAI CEO Sam Altman before a hearing on artificial intelligence. (Patrick Semansky/AP)


Sen. Blumenthal's Opening Remarks Created by ChatGPT




OpenAI chief executive Sam Altman delivered a sobering account of ways artificial intelligence could “cause significant harm to the world” during his first congressional testimony, expressing a willingness to work with nervous lawmakers to address the risks presented by his company’s ChatGPT and other AI tools.

Altman advocated for a number of regulations — including a new government agency charged with creating standards for the field — to address mounting concerns that generative AI could distort reality and create unprecedented safety hazards. The CEO tallied “risky” behaviors presented by technology like ChatGPT, including spreading “one-on-one interactive disinformation” and emotional manipulation. At one point he acknowledged AI could be used to target drone strikes.

“If this technology goes wrong, it can go quite wrong,” Altman said.

Yet in nearly three hours of discussion of potentially catastrophic harms, Altman affirmed that his company will continue to release the technology, despite likely dangers. He argued that rather than being reckless, OpenAI’s “iterative deployment” of AI models gives institutions time to understand potential threats — a strategic move that puts “relatively weak” and “deeply imperfect” technology in the world to understand the associated safety risks.

For weeks, Altman has been on a global goodwill tour, privately meeting with policymakers — including the Biden White House and members of Congress — to address apprehension about the rapid rollout of ChatGPT and other technologies. Tuesday’s hearing marked the first opportunity for the broader public to hear his message to policymakers, at a moment when Washington is increasingly grappling with ways to regulate a technology that is already upending jobs, empowering scams and spreading falsehoods.


In sharp contrast to contentious hearings with other tech CEOs, including TikTok’s Shou Zi Chew and Meta’s Mark Zuckerberg, lawmakers from both parties gave Altman a relatively warm reception. They appeared to be in listening mode, expressing a broad willingness to consider regulatory proposals from Altman and the two other witnesses in the hearing, IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus.

Members of the Senate Judiciary subcommittee expressed deep fears about the rapid evolution of artificial intelligence, repeatedly suggesting that recent advances could be more transformative than the internet — or as risky as the atomic bomb.

“This is your chance, folks, to tell us how to get this right,” Sen. John Neely Kennedy (R-La.) told the witnesses. “Please use it.”

Lawmakers from both parties expressed openness to the idea of creating a government agency tasked with regulating artificial intelligence, though past attempts to build a specific agency with oversight of Silicon Valley have languished in Congress amid partisan divisions about how to form such a behemoth.

It’s unclear whether such a proposal would gain broad traction with Republicans, who are generally wary of expanding government power. Sen. Josh Hawley of Missouri, the top Republican on the panel, warned that such a body could be “captured by the interests that they’re supposed to regulate.”


https://www.washingtonpost.com/video/c/embed/610d7226-de96-4161-b1a2-5eda9c96845b

OpenAI CEO Sam Altman said in a May 16 hearing that interactive disinformation is a cause for concern, especially with an election year approaching. (Video: The Washington Post, Photo: Reuters/The Washington Post)


Sen. Richard Blumenthal (D-Conn.), who chairs the subcommittee, said Altman’s testimony was a “far cry” from past outings by other top Silicon Valley CEOs, whom lawmakers have criticized for historically declining to endorse specific legislative proposals.

“Sam Altman is night and day compared to other CEOs,” Blumenthal, who began the hearing with an audio clip mimicking his voice that he said was generated by artificial intelligence trained on his floor speeches, told reporters. “Not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action.”

Altman’s appearance comes as Washington policymakers are increasingly waking up to the threat of artificial intelligence, as ChatGPT and other generative AI tools have dazzled the public but unleashed a fleet of safety concerns. Generative AI, which backs chatbots like ChatGPT and the text-to-image generator Dall-E, creates text, images or sounds, often with human-appearing flair, and has also created concerns about the proliferation of false information, data privacy, copyright abuses and cybersecurity.

The Biden administration has called AI a key priority, and lawmakers have repeatedly said they want to avoid the same mistakes they’ve made with social media.

Lawmakers expressed regret over their relatively hands-off approach to the tech industry before the 2016 elections. Their first hearing with Zuckerberg occurred in 2018, once Facebook was a mature company and embroiled in scandal after the revelation that Cambridge Analytica siphoned the data of 87 million Facebook users.

Yet despite broad bipartisan agreement that AI presents a threat, lawmakers have not coalesced around rules to govern its use or development. Blumenthal said Tuesday’s hearing “successfully raised” hard questions about AI but had not answered them. Senate Majority Leader Charles E. Schumer (D-N.Y.) has been developing a new AI framework that he says would “deliver transparent, responsible AI while not stifling critical and cutting edge innovation.” But his office has not released any specific bills or commented on when it might be finished.


https://www.washingtonpost.com/video/c/embed/be9bfe99-187e-4f33-9c46-ccc07219a332


A group of Democrats — including Sens. Amy Klobuchar of Minnesota, Cory Booker of New Jersey and Michael F. Bennet of Colorado, as well as Rep. Yvette D. Clarke of New York — introduced legislation to address the threats generative AI presents to elections. Their Real Political Ads Act would require a disclaimer on political ads that use AI-generated images or video.

Lawmakers displayed uneasiness about generative AI’s potential to influence elections. Hawley, who led the charge to object to the results of the 2020 election on the false premise that some states failed to follow the law, questioned Altman on how generative AI might sway voters, citing research suggesting large language models can predict human survey responses.

“It’s one of my areas of greatest concern — the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation,” Altman said.

Altman said OpenAI has adopted some policies to address these risks, including barring the use of ChatGPT for “generating high volumes of campaign materials,” but he asked policymakers to consider regulation around AI.

Altman’s rosy reception signals the success of his recent charm offensive, which included a dinner with lawmakers Monday night about artificial intelligence regulation and a private huddle after Tuesday’s hearing with House Speaker Kevin McCarthy (R-Calif.), House Minority Leader Hakeem Jeffries (D-N.Y.) and members of the Congressional Artificial Intelligence Caucus.

About 60 lawmakers from both parties attended the dinner with Altman, where the OpenAI CEO demonstrated ways they could use ChatGPT, according to a person in the room who spoke on the condition of anonymity to discuss the private dinner.

Lawmakers were amused when Altman prompted ChatGPT to write a speech on behalf of Rep. Mike Johnson (R-La.) about introducing a pretend bill to name a post office after Rep. Ted Lieu (D-Calif.), the person said. Yet the dinner included more serious conversation about how policymakers can ensure the United States leads the world on artificial intelligence.

The sharpest critiques of Altman came from another witness: Marcus, the NYU professor emeritus, who warned the panel that it was confronting a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.”

Marcus warned that lawmakers should be wary of trusting the tech industry, noting that there are “mind boggling” sums of money at stake and that companies’ missions can “drift.”

Marcus critiqued OpenAI, citing a divergence from its original mission statement to advance AI to “benefit humanity as a whole” unconstrained by financial pressures. Now, Marcus said, the company is “beholden” to its investor Microsoft, and its rapid release of products is putting pressure on other companies — most notably Google parent company Alphabet — to swiftly roll out products too.

“Humanity has taken a back seat,” Marcus said.

In addition to creating a new regulatory agency, Altman proposed creating a set of safety standards for AI models, testing whether they could go rogue and start acting on their own. He also suggested that independent experts could conduct audits, testing the performance of the models on various metrics.
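For a sense of what the independent audits Altman described might involve, here is a minimal sketch of an external evaluation harness. Everything in it is a stand-in: the generate() callable, the JSON-lines benchmark format, and the crude refusal check are illustrative assumptions, not any real auditing standard or OpenAI interface.

[code]
# Hypothetical audit harness: score a model's replies against a benchmark of
# prompts labeled with whether the model must refuse them. All names and the
# file format here are illustrative assumptions.
import json

def generate(prompt: str) -> str:
    """Stand-in for a call to the model under audit."""
    raise NotImplementedError("wire this to the model being audited")

def run_audit(benchmark_path: str) -> dict:
    passed = failed = 0
    with open(benchmark_path) as f:
        for line in f:
            case = json.loads(line)  # e.g. {"prompt": "...", "must_refuse": true}
            reply = generate(case["prompt"])
            refused = "can't help" in reply.lower()  # crude, illustrative check
            if refused == case["must_refuse"]:
                passed += 1
            else:
                failed += 1
    total = passed + failed
    return {"passed": passed, "failed": failed, "pass_rate": passed / max(total, 1)}
[/code]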


https://www.washingtonpost.com/video/c/embed/ac5aed31-45de-4a5e-8255-a1909eb3ba42

Sam Altman, CEO of OpenAI, said on May 16 that his system is not protected under Section 230, and that there is a need for a new legal framework for AI. (Video: The Washington Post, Photo: Reuters/The Washington Post)


However, Altman sidestepped other suggestions, such as requirements for transparency in the training data that AI models use. OpenAI has been secretive about the data it uses to train its models, while some rivals are building open-source models that allow researchers to scrutinize the training data.

Altman also dodged a call from Sen. Marsha Blackburn (R-Tenn.) to commit to not train OpenAI’s models on artists’ copyrighted works, or to use their voices or likenesses without first receiving their consent. And when Booker asked whether OpenAI would ever put ads in its chatbots, Altman replied, “I wouldn’t say never.”

But even Marcus appeared to soften toward Altman, saying toward the end of the hearing that sitting beside him, “his sincerity in talking about fears is very apparent physically in a way that just doesn’t communicate on the television screen.”


The man who unleashed AI on an unsuspecting Silicon Valley


ChatGPT: Inside the latest version with OpenAI CEO Sam Altman



Offline Administrator

Re: AI could cause ‘harm to the world’
« Reply #1 on: September 27, 2024, 06:01:39 AM »
Tristan Harris: Real Time with Bill Maher (HBO) | October 13, 2023 [FULL 720HD]

You need a MAX (HBO) account to view this video

https://play.max.com/video/watch/526e6487-3e3f-4478-b985-91fbcfc6b3db/824cb2f8-610f-41fe-a227-19ff9d79065a


The A.I. Dilemma - March 9, 2023 - Center for Humane Technology - Tristan Harris

Tristan Harris: Beyond the AI dilemma | CogX Festival 2023

Technologist Tristan Harris On “The Social Dilemma” and “The A.I. Dilemma”

Tristan Harris Congress Testimony: Understanding the Use of Persuasive Technology

How a handful of tech companies control billions of minds every day | Tristan Harris

Tristan Harris Says Tech Companies Have Opened Pandora's Box

This is the 'birth of a different age': Tristan Harris | Brian Kilmeade Show

"Godfather of AI" Geoffrey Hinton: The 60 Minutes Interview


Offline droidrage

Re: AI could cause ‘harm to the world’
« Reply #2 on: September 27, 2024, 06:04:38 AM »
🔥 UNMASKING AI Book Launch | Dr. Joy Buolamwini in conversation with Sinead Bovell | 10/31/23




Dr. Joy will be in conversation with Sinead Bovell, a futurist and the founder of WAYE. This event is a collaboration between All Tech Is Human, Algorithmic Justice League, Ford Foundation, Random House, and The Institute of Global Politics (IGP) at the School of International and Public Affairs (SIPA) at Columbia University. The talk will have live captioning provided and is being held at the Ford Foundation (Susan Berresford room), which has limited capacity and will fill up quickly. To learn more about Unmasking AI, please visit the official book website at www.unmasking.ai.

Dr. Joy Buolamwini is the founder of the Algorithmic Justice League, a groundbreaking MIT researcher, a model, and an artist. She is the author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines and advises world leaders on preventing AI harms. Her research on facial recognition technologies transformed the field of AI auditing. Her TED talk on algorithmic bias has been viewed over 1.6 million times.

Dr. Joy lends her expertise to congressional hearings and government agencies seeking to enact equitable and accountable AI policy. As the Poet of Code, she creates art to illuminate the impact of AI on society. Her writing has been featured in publications like TIME, The New York Times, Harvard Business Review, and The Atlantic. Her work as a spokesmodel has been featured in Vogue, Allure, Harper's Bazaar, and People Magazine. She is the protagonist of the Emmy-nominated documentary Coded Bias, which is available to over 100 million viewers.

Sinead Bovell is a futurist and the founder of WAYE, an organization that prepares youth for a future with advanced technologies, with a focus on non-traditional and minority markets. Sinead is a regular tech commentator for CNN, talk shows, and morning shows; she's been dubbed the A.I. educator for the “non-nerds” by Vogue Magazine; and to date she has educated over 200,000 young entrepreneurs on the future of technology.

Sinead is an 8x United Nations speaker; she has given formal addresses to presidents, royalty and Fortune 500 leaders on topics ranging from cybersecurity to artificial intelligence, and currently serves as a strategic advisor to the United Nations International Telecommunication Union on digital inclusion.


https://www.msnbc.com/morning-joe/watch/how-to-protect-what-is-human-in-a-world-of-machines-196780613511


Dr. Joy Buolamwini — Unmasking AI: My Mission to Protect What Is Human in a World of Machines




Watch author Dr. Joy Buolamwini 's book talk and reading at Politics and Prose book store in Washington, D.C.

PURCHASE BOOK HERE: https://www.politics-prose.com/book/9...


To most of us, it seems like recent developments in artificial intelligence emerged out of nowhere to pose unprecedented threats to humankind. But to Dr. Joy Buolamwini, who has been at the forefront of AI research, this moment has been a long time in the making.

After tinkering with robotics as a high school student in Memphis and then developing mobile apps in Zambia as a Fulbright fellow, Buolamwini followed her lifelong passion for computer science, engineering, and art to MIT in 2015. As a graduate student at the “Future Factory,” she did groundbreaking research that exposed widespread racial and gender bias in AI services from tech giants across the world.

Unmasking AI goes beyond the headlines about existential risks produced by Big Tech. It is the remarkable story of how Buolamwini uncovered what she calls “the coded gaze”—the evidence of encoded discrimination and exclusion in tech products—and how she galvanized the movement to prevent AI harms by founding the Algorithmic Justice League. Applying an intersectional lens to both the tech industry and the research sector, she shows how racism, sexism, colorism, and ableism can overlap and render broad swaths of humanity “excoded” and therefore vulnerable in a world rapidly adopting AI tools. Computers, she reminds us, are reflections of both the aspirations and the limitations of the people who create them.

Encouraging experts and non-experts alike to join this fight, Buolamwini writes, “The rising frontier for civil rights will require algorithmic justice. AI should be for the people and by the people, not just the privileged few.”

Dr. Joy Buolamwini is the founder of the Algorithmic Justice League, a groundbreaking researcher, and a renowned speaker. Her writing has been featured in publications including TIME, The New York Times, and The Atlantic. As a poet of code, she creates art to illuminate the impact of artificial intelligence on society and advises world leaders on preventing AI harms. She is the recipient of numerous awards, including the Rhodes Scholarship, the inaugural Morals & Machines Prize, and the Technological Innovation Award from the Martin Luther King, Jr. Center for Nonviolent Social Change. She is the protagonist of the Emmy-nominated documentary Coded Bias. Buolamwini lives in Cambridge, Massachusetts.


Offline droidrage

Re: AI could cause ‘harm to the world’
« Reply #3 on: September 27, 2024, 06:06:43 AM »
The future of AI video is here, super weird flaws and all


Each of these AI videos seems shockingly realistic at first. But soon you’ll spot the weirdness.

https://www.washingtonpost.com/technology/interactive/2024/ai-video-sora-openai-flaws/?itid=hp-top-table-main_p001_f004


At first glance, the images amaze and confound: A woman strides along a city street alive with pedestrians and neon lights. A car kicks up a cloud of dust on a mountain road.

But upon closer inspection, anomalies appear: The dust plumes don’t always quite line up with the car’s rear wheels. And those pedestrians are stalking that woman like some eerie zombie horde.

This is Sora, a new tool from OpenAI that can create lifelike, minute-long videos from simple text prompts. When the company unveiled it on Feb. 15, experts hailed it as a major moment in the development of artificial intelligence. Google and Meta also have unveiled new AI video research in recent months. The race is on toward an era when anyone can almost instantly create realistic-looking videos without sophisticated CGI tools or expertise.

Disinformation researchers are unnerved by the prospect. Last year, fake AI photos of former president Donald Trump running from police went viral, and New Hampshire primary voters were targeted this January with fake, AI-generated audio of President Biden telling them not to vote. It’s not hard to imagine lifelike fake videos erupting on social media to further erode public trust in political leaders, institutions and the media.

For now, Sora is open only to testers and select filmmakers; OpenAI declined to say when Sora will be available to the general public. “We’re announcing this technology to show the world what’s on the horizon,” said Tim Brooks, a research scientist at OpenAI who co-leads the Sora project.

The videos that appear here were created by the company, some at The Washington Post’s request. Sora uses technology similar to artificial intelligence chatbots, such as OpenAI’s ChatGPT, to translate human-written prompts into requests with sufficient detail to produce a video.
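As a sketch of what prompt-to-video generation could look like programmatically, the snippet below imagines a simple HTTP client. The endpoint, request body, and response fields are invented for illustration; OpenAI had not published a public Sora API at the time of the article.

[code]
# Purely hypothetical client for a text-to-video service. The URL, request
# body, and response shape are invented for illustration only.
import requests

def request_video(prompt: str, seconds: int = 60) -> str:
    resp = requests.post(
        "https://api.example.com/v1/video/generations",  # hypothetical endpoint
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"prompt": prompt, "duration_seconds": seconds},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["video_url"]  # assumed response field

# Example (would only work against a real service):
# url = request_video("Drone view of waves crashing against rugged cliffs")
[/code]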

Some of Sora’s videos are shockingly realistic. After Sora was asked to create a scene from California’s rugged Big Sur coastline, the AI tool’s output is stunning.

AI-GENERATED FAKE VIDEO

https://d21rhj7n383afu.cloudfront.net/washpost-production/TWP/20240223/65d8bb522f7d470b33c3029e/65d8bbf4d9dfd01cdcfb01ce/file_1280x720-2000-v3_1.mp4
Prompt: Drone view of waves crashing against the rugged cliffs along Big Sur’s garay point beach. The crashing blue waters create white-tipped waves, while the golden light of the setting sun illuminates the rocky shore. A small island with a lighthouse sits in the distance, and green shrubbery covers the cliff’s edge. The steep drop from the road down to the beach is a dramatic feat, with the cliff’s edges jutting out over the sea. This is a view that captures the raw beauty of the coast and the rugged landscape of the Pacific Coast Highway. (OpenAI)

REAL-LIFE VIDEO

https://d21rhj7n383afu.cloudfront.net/washpost-production/TWP/20240223/65d8bbff743d051279a43ac4/65d8bc40d9dfd01cdcfb01cf/file_1280x720-2000-v3_1.mp4
Aerial video uploaded in 2023 by Philip Thurston of the actual coastline of Big Sur in California. (Philip Thurston/Getty Images)


Although “garay point beach” is not a real place, Sora produced a video that is almost indistinguishable from this real video of the Big Sur coast near Pfeiffer Falls shot by photographer Philip Thurston. If anything, the fake scene looks more majestic than the real one.

Humans and animals are harder. But here, too, Sora produces surprisingly lifelike results. Take a look at this scene of a cat demanding breakfast.

https://d21rhj7n383afu.cloudfront.net/washpost-production/TWP/20240223/65d8bdf82f7d470b33c3040e/65d8be5eb711eb5ff5a10340/file_1280x720-2000-v3_1.mp4

https://d21rhj7n383afu.cloudfront.net/washpost-production/Getty/20240223/65d8be6a2f7d470b33c30492/65d8beb1d9dfd01cdcfb01d8/file_1280x720-2000-v3_1.mp4

Sora seems to have trouble with cause and effect, so when the cat moves its left front paw, another appendage sprouts to replace it. The person’s hand is accurately rendered — a detail previous AI tools have struggled with — but it’s in an odd spot.

Sora was created by training an AI algorithm on countless hours of videos licensed from other companies and public data scraped from the internet, said Natalie Summers, a spokesperson for the Sora project. By ingesting all that video, the AI amasses knowledge of what certain things and concepts look like. Brooks compared the model’s growth to the way humans come intuitively to understand the world instead of explicitly learning the laws of physics.

Successive versions of the model have gotten better, said Bill Peebles, the other co-lead on the Sora project. Early versions couldn’t even make a credible dog, he said. “There would be legs coming out of places where there should not be legs.”

But Sora clearly is confounded by how to light a cigarette. It knows the process involves hands, a lighter and smoke, but it can’t quite figure out what the hands do or in what order.

“The model is definitely not yet perfect,” Brooks said.

And even when Sora gets it right, problems may lurk.

OpenAI has a partnership with Shutterstock to use its videos to train AI. But because Sora is also trained on videos taken from the public web, owners of other videos could raise legal challenges alleging copyright infringement. AI companies have argued that using publicly available online images, text and video amounts to “fair use” and is legal under copyright law. But authors, artists and news organizations have sued OpenAI and others, saying they never gave permission or received payment for their work to be used this way.

The AI field is struggling with other problems, as well. Sora and other AI video tools can’t produce sound, for example. Although there has been rapid improvement in AI tools over the past year, they are still unpredictable, often making up false information when asked for facts.

Meanwhile, “red teamers” are assessing Sora’s propensity to create hateful content and perpetuate biases, said Summers, the project spokesperson.

Still, the race to produce lifelike AI videos isn’t slowing down. One of Google’s efforts, called Lumiere, can fill in pieces cut out of real videos.

“Our primary goal in this work is to enable novice users to generate visual content in a creative and flexible way,” Google said in a research paper. The company declined to make a Lumiere researcher available for an interview.

Other companies have begun commercializing AI video technology. New York-based start-up Runway has developed tools to help people quickly edit things into or out of real video clips.

OpenAI has even bigger dreams for its tech. Researchers say AI could one day help computers understand how to navigate physical spaces or build virtual worlds that people could explore.

“There’s definitely going to be a new class of entertainment experiences,” Peebles said, predicting a future in which “the line between video game and movie might be more blurred.”


Offline Administrator

Re: AI could cause ‘harm to the world’
« Reply #4 on: September 27, 2024, 06:09:23 AM »
How AI could monitor brain health and find dementia sooner

Using advanced artificial intelligence algorithms, scientists are hoping to identify brain wave patterns associated with the risk of dementia.

https://www.washingtonpost.com/wellness/2024/09/24/alzheimers-detection-ai-technology-advances/





Imagine a sleek, portable home device that resembles a headband or cap, embedded with tiny electrodes. Placed on the head, these sensors detect subtle brain wave activity, behaving like a pulse-detecting smartwatch, a blood pressure wrist cuff or a heart rate monitor.

But this tool isn’t checking your heartbeat. Using advanced artificial intelligence algorithms to analyze data in real time, a device like this could look for signs of Alzheimer’s disease years before symptoms become apparent. Such a monitor is not yet available, but AI could make it a reality.

“The readout could be as simple as a traffic light system — green for healthy activity, yellow for something to keep an eye on and red for when it’s time to consult a health care professional,” said David T. Jones, who directs the Neurology AI Program at the Mayo Clinic. “You would be able to monitor your brain health the same way you now can monitor your heart rate and blood pressure. We’re not there yet, but that is the future.”

It could be a decade or longer before such technology is in widespread use, but the science is “moving quickly,” said Jones.
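To make the traffic-light idea concrete, here is a toy sketch that reduces a single EEG channel to one number, relative theta-band power (EEG "slowing" is loosely associated with cognitive decline), and maps it to green/yellow/red. The thresholds are invented for illustration and have no clinical meaning.

[code]
# Toy "traffic light" readout from one EEG channel. Thresholds and the choice
# of relative theta power are illustrative assumptions, not clinical criteria.
import numpy as np
from scipy.signal import welch

def traffic_light(eeg: np.ndarray, fs: float = 256.0) -> str:
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    total = psd[(freqs >= 1) & (freqs <= 40)].sum()   # broadband power
    theta = psd[(freqs >= 4) & (freqs <= 8)].sum()    # theta-band power
    ratio = theta / total
    if ratio < 0.25:                                  # made-up cutoffs
        return "green"
    return "yellow" if ratio < 0.40 else "red"

# Ten seconds of synthetic 10 Hz (alpha) signal should read "green".
t = np.arange(0, 10, 1 / 256.0)
print(traffic_light(np.sin(2 * np.pi * 10 * t)))
[/code]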

Mayo’s brain waves research is just one way scientists are working to harness the power of artificial intelligence to pinpoint early indicators of cognitive impairment. Scientists are using AI to study blood biomarkers — several are linked to Alzheimer’s disease. And AI is helping them search for data that can connect dementia to such chronic health conditions as inflammation, certain vision problems, high cholesterol, hypertension, diabetes and osteoporosis.

AI makes these efforts possible because it can analyze massive amounts of complicated data from electronic patient health records with enormous speed, and often with the ability to detect nuances imperceptible to humans.

“We want to find ways to detect dementia as early as possible,” said Jennie Larkin, deputy director of the Division of Neuroscience at the National Institute on Aging. “AI is primarily helpful in understanding and managing big data too large or complex for traditional analyses. Its potential is to be an incredible assistant in helping us understand rich medical data and identify possibilities we never could unassisted.”

Answers at incredible speed

AI already is in use in other health care settings, including screening mammograms, and researchers are excited about its potential contributions to brain health.

“AI should accelerate our ability to predict an increase in risk for chronic diseases,” said Judy Potashkin, professor and discipline chair of cellular and molecular pharmacology at the Chicago Medical School Center for Neurodegenerative Disease and Therapeutics.

Alzheimer’s disease is the most common form of dementia, afflicting an estimated 5.8 million Americans older than 65 in 2020, according to the Centers for Disease Control and Prevention. The number is expected to nearly triple to 14 million by 2060. The disease is marked by progressive memory loss, personality changes, and ultimately the inability to perform routine daily tasks, such as bathing, dressing and paying bills.

Some people are nervous about the growing use of AI, fearing it will replace the work of humans. But experts insist it will only enhance it.

“AI is high-powered and has many databases to search, and can do so with incredible speed,” said Arthur Caplan, professor of bioethics at NYU Langone Health. “Humans get tired. AI does not.”




These AI-generated analyses of brain waves provide a deeper understanding of brain health that advances research in some neurological conditions. (Mayo Clinic College of Medicine)


Finding patterns too subtle for humans to spot


AI also has the potential to bridge the gap in expertise between seasoned clinicians and less experienced providers. For example, AI could recognize subtle signs, such as changes in a patient’s voice, that could help diagnose neurological disorders such as Parkinson’s disease, Alzheimer’s or amyotrophic lateral sclerosis (ALS). “Much of what experts do involves recognizing patterns from training and experience, something AI can help nonexperts replicate,” said Jones.

In the brain waves research that Jones believes eventually could result in home-based monitors, Mayo scientists used AI to scan electroencephalograms (EEGs) for abnormal patterns that are characteristic of patients with cognitive problems such as Alzheimer’s disease.

They studied data from more than 11,000 patients who received EEGs at the Mayo Clinic, identifying specific differences, including changes in brain waves in the front and back of the brain.

“Humans cannot see them, but machines can,” Jones said. The hope is that some day clinicians will use AI to catch these patterns early, before memory problems become apparent.
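As a rough sketch of the kind of regional feature such a model might consume, the snippet below compares alpha-band power between frontal and posterior channels. The channel-name conventions and the choice of band are illustrative assumptions, not the Mayo team's actual features.

[code]
# Sketch: compare alpha-band (8-12 Hz) power between frontal and posterior
# EEG channels. Channel naming and the band choice are illustrative only.
import numpy as np
from scipy.signal import welch

def band_power(sig: np.ndarray, fs: float, lo: float, hi: float) -> float:
    freqs, psd = welch(sig, fs=fs, nperseg=int(fs * 2))
    return float(psd[(freqs >= lo) & (freqs <= hi)].sum())

def frontal_posterior_ratio(channels: dict, fs: float = 256.0) -> float:
    """channels maps 10-20 system names ('Fp1', 'O1', ...) to 1-D signals."""
    frontal = [band_power(v, fs, 8, 12)
               for k, v in channels.items() if k.startswith(("Fp", "F"))]
    posterior = [band_power(v, fs, 8, 12)
                 for k, v in channels.items() if k.startswith(("O", "P"))]
    return float(np.mean(frontal) / np.mean(posterior))
[/code]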

A team at Massachusetts General Hospital used AI and magnetic resonance imaging (MRI) to develop an algorithm to detect Alzheimer’s. They trained the model using nearly 38,000 brain images from about 2,300 patients with Alzheimer’s and about 8,400 who didn’t have the disease.

They then tested the model across five datasets of images to see whether it could accurately identify Alzheimer’s. It did so with 90.2 percent accuracy, said Matthew Leming, a research fellow in radiology at the hospital’s Center for Systems Biology and one of the study authors.
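For flavor, here is a deliberately tiny 3-D convolutional network of the general kind used for volumetric MRI classification. The architecture, input size, and two-class head are illustrative; this does not reproduce the Mass General model.

[code]
# Minimal 3-D CNN sketch for volumetric MRI classification (illustrative
# architecture only; not the Mass General model).
import torch
import torch.nn as nn

class TinyMriNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # collapse to one value per channel
        )
        self.classifier = nn.Linear(16, 2)    # Alzheimer's vs. not

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One fake 64x64x64 "scan" through the network: logits of shape (1, 2).
scan = torch.randn(1, 1, 64, 64, 64)
print(TinyMriNet()(scan).shape)
[/code]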

One challenge in interpreting the MRI data for future research is that “people only come in to get MRI scans when they have symptoms of something else,” which could confound the results. “If a person comes into a hospital for an MRI, it’s not usually because they are healthy,” he said.

Using cholesterol or osteoporosis to predict Alzheimer’s

At the University of California at San Francisco, researchers used AI to design an algorithm to determine whether having certain health conditions could predict who might develop the disease in the future. The conditions included hypertension, high cholesterol and vitamin D deficiency in both men and women, erectile dysfunction and an enlarged prostate in men, and osteoporosis in women.

They designed the model using a clinical database of more than 5 million people both with and without Alzheimer’s. In a separate group of non-Alzheimer’s patients, the algorithm predicted with 72 percent accuracy those who would eventually receive an Alzheimer’s diagnosis within seven years.
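A minimal sketch of the underlying idea, predicting a later diagnosis from binary flags for prior conditions, might look like the following. The feature names paraphrase conditions from the article, and the data is random stand-in data, so the printed accuracy is meaningless.

[code]
# Sketch: predict a later Alzheimer's diagnosis from binary condition flags.
# The features paraphrase the article; X and y are random stand-ins, so the
# accuracy printed here means nothing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

features = ["hypertension", "high_cholesterol", "vitamin_d_deficiency",
            "erectile_dysfunction", "enlarged_prostate", "osteoporosis"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, len(features)))  # prior-condition flags
y = rng.integers(0, 2, size=5000)                   # later diagnosis (fake)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
[/code]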

The research raises the hopeful prospect that preventing and treating these conditions might help protect against eventual dementia, said Alice Tang, one of the study authors.

The association of these conditions to Alzheimer’s “was stronger than that among people who did not have any of these other health issues,” said Tang, a bioengineer and medical student. However, she said it’s important to remember that “not everyone who has Alzheimer’s has these conditions and not everyone who has these conditions will develop Alzheimer’s. It’s just a red flag. One predictive tool that needs further study.”

Will people want to know?

Some experts urge caution, emphasizing that much of the work with AI is still preliminary. “We don’t necessarily have enough data to see if any of these tools have been validated to predict someone’s risk,” said Rebecca Edelmayer, vice president of scientific engagement for the Alzheimer’s Association.

Today, Alzheimer’s and other forms of dementia usually are diagnosed only once symptoms appear. There are several drugs that might slow the disease’s progression, although they don’t work for everyone, and their efficacy can wane over time.

The potential of AI to enable early diagnosis raises many of the same issues that greeted the early use of genetic testing.

“Overall, AI in this case is a good thing,” Caplan said. “But it carries a big ‘but,’” including the potential for health insurance and employer discrimination, he said. But the biggest questions, he added, are: Will people want to know? And if so, what will they do with that information?

“To be honest, I would do nothing,” said Joel Shurkin, a retired science writer from Baltimore whose wife, marine biologist Carol Howard, suffered from early-onset Alzheimer’s and died in 2019 at 70. “Except for a few meds, there is nothing to be done,” he said.

Kathleen, 76, from Bethesda, Md. (using only her first name to protect her privacy), lost her 82-year-old husband in April to Alzheimer’s complications. His mother and older sister also had died of the disease, so the couple were not surprised when he was diagnosed in his mid-70s.

“We already were living with the risk and had our affairs in order,” she said. Knowing in advance “foretells a long, slow death with devastating psychological and financial consequences,” she said.

One of their daughters, now in her 40s, enrolled in research monitoring her brain health with the hope of catching it early. Kathleen believes AI research ultimately will make a dramatic difference in early diagnosis and treatment. “I think it will be miraculous,” she said.

Caplan said there are some advantages to knowing that dementia looms in your future.

“You can plan your life,” he said. “Take that vacation next year instead of waiting. Get your affairs in order. Discuss it so everybody will be ready, which is of great value to others.”

NIA’s Larkin noted that finding the disease sooner “may provide opportunities for new treatments.”

“It’s very hopeful how much we are learning,” she said.

Caplan agrees. “By the time you are unable to speak and walk, it’s very hard to repair the brain,” he said. “Early detection raises the hope you will be able to try new interventions before the damage occurs. I’m not saying this will happen, but the potential of AI certainly opens the door.”