
ChatGPT: Artificial Intelligence, chatbots and a world of unknowns | 60 Minutes

  • 3 Replies
  • 144 Views

Offline Administrator

  • Carpe Diem
ChatGPT: Artificial Intelligence, chatbots and a world of unknowns | 60 Minutes

Embedded videos:
  • ChatGPT Explained: What is Chat GPT by OpenAI? Are we doomed?
  • What is ChatGPT? OpenAI's Chat GPT Explained
  • Why OpenAI’s ChatGPT Is Such A Big Deal
  • How To Use Chat GPT by Open AI For Beginners
  • ChatGPT Tutorial: How to Use Chat GPT For Beginners 2024
  • Secrets to Writing with Chat GPT (Use Responsibly)

« Last Edit: October 01, 2024, 07:26:14 AM by Administrator »


Offline droidrage

  • I Am Imortem Joe
How To Create A News Channel With ChatGPT & AI News Video Generator





« Last Edit: October 06, 2024, 11:46:23 PM by droidrage »


Offline droidrage

  • I Am Imortem Joe
Struggling with mental health, many turn to AI like ChatGPT for therapy

Chatbots powered by artificial intelligence are increasingly being used for therapy, even though most aren’t designed to provide clinical care.


It was the anniversary of the day her baby daughter died, and though 20 years had passed, Holly Tidwell couldn’t stop crying. “I wonder if there’s something wrong with me,” she confided in a trusted source.

The response was reassuring and empathetic. “The bond you had, even in those brief moments, is profound and lasting,” she was told. “Remembering your daughter and honoring her memory is a beautiful way to keep that connection alive.”

The words came not from a friend or therapist, but from an app on her phone powered by artificial intelligence called ChatOn. Tidwell, an entrepreneur in North Carolina, said the chatbot’s responses moved her and provided valuable advice. As a person who “reads all the therapy books,” she said, “I haven’t really seen it be wrong.”

Anxious, depressed or just lonely, people who can’t find or afford a professional therapist are turning to artificial intelligence, seeking help from chatbots that can spit out instantaneous, humanlike responses — some with voices that sound like a real person — 24 hours a day at little to no cost. But the implications of vulnerable people relying on robots for emotional advice are poorly understood and potentially profound, stirring vigorous debate among psychologists.

This week, the mother of a 14-year-old boy who killed himself after developing a romantic attachment to an AI bot sued the company that made it, Character.AI, alleging it caused his mental health to deteriorate in what is believed to be one of the first cases of its kind.

“As a parent, this should be something that you’re aware of, that somebody is tinkering inside your kid’s head,” Megan Garcia, the teen’s mother, said in an interview.

A spokesperson for Character.AI said the company was “heartbroken by the tragic loss of one of our users.” It has put in place “numerous new safety measures” in the past six months, such as a pop-up that directs users to the National Suicide Prevention Lifeline when it detects terms associated with self-harm and thoughts of suicide, the company said.
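
For readers curious how that kind of safeguard generally works, here is a minimal sketch in Python, assuming a simple keyword screen that surfaces the 988 lifeline. The term list, message text and function name are illustrative; Character.AI's actual detection system is not public and is presumably more sophisticated.

# Hypothetical sketch of a keyword-based crisis screen, for illustration only.
# Production systems rely on trained classifiers, conversation context and
# human review rather than a fixed term list.
CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "end my life"}  # illustrative

HOTLINE_POPUP = (
    "If you're having thoughts of suicide or self-harm, help is available: "
    "call or text 988, or visit 988lifeline.org."
)

def screen_message(user_message: str) -> str | None:
    """Return a hotline pop-up if the message contains a crisis-related term."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return HOTLINE_POPUP
    return None  # no match; the conversation continues normally

Even a sketch this small shows why such measures are partial: a fixed list misses euphemisms and context, which is part of what the researchers quoted below worry about.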

The case has alarmed some researchers who worry about patients putting their trust in unproven apps that haven’t been reviewed by the U.S. Food and Drug Administration for safety and effectiveness, aren’t designed to protect individuals’ personal health information, and can produce feedback that is biased or off-base.

Matteo Malgaroli, a psychologist and professor at New York University’s Grossman School of Medicine, cautioned against using untested technology on mental health without more scientific study to account for the risks.

“Would you want a car that brings you to work faster, but one in a thousand times it could explode?” he said.

Organizations that operate mental health chatbots say their users collectively number in the tens of millions, and that doesn’t count those who use apps like ChatGPT that aren’t marketed for mental health but are praised on social media as a popular therapy hack. Such apps are tapping into a wellspring of human anxiety and need, with some physicians pointing to their potential to remove barriers to care, such as high costs and a shortage of providers.

An estimated 6.2 million people with a mental illness in 2023 wanted but didn’t receive treatment, according to the Substance Abuse and Mental Health Services Administration, a federal agency. The chasm is set to widen: The National Center for Health Workforce Analysis estimates a need for nearly 60,000 additional behavioral health workers by 2036, but instead expects that there will be roughly 11,000 fewer such workers.

For years, scholars have studied how computers can get patients to divulge sensitive information that is important for treatment. A widely cited 2014 paper found that people were more willing to share embarrassing information with a “virtual human” that wouldn’t judge them. A 2023 study rated chatbot responses to medical questions “significantly more empathetic” than physician answers.

Much of the debate among mental health professionals centers on the guardrails for what an AI chatbot can say.

Woebot, a more established mental-health chatbot made available through health-care providers, uses AI to interpret what patients type and pulls from a vast library of responses, pre-written and vetted by mental health professionals.

But on the other end of the chatbot spectrum is generative AI, like ChatGPT, which churns out its own responses to any topic. That typically produces a more fluid conversation, but it is also prone to going off the rails. While ChatGPT is marketed as a way to find information faster and boost productivity, other apps featuring generative AI are explicitly marketed as a service for companionship or improving mental health.
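
The difference between the two designs can be made concrete with a short Python sketch, assuming an invented library of clinician-vetted replies and a generic text-generation model; neither reflects Woebot's or OpenAI's actual code.

# Hypothetical contrast: a retrieval-style bot can only return pre-written,
# vetted text, while a generative bot composes free-form responses.
VETTED_RESPONSES = {  # illustrative stand-ins for a clinician-reviewed library
    "anxious": "Let's try a grounding exercise: name five things you can see right now.",
    "sleep": "Trouble sleeping is common. A consistent bedtime routine can help.",
}

def retrieval_bot(user_message: str) -> str:
    """Match the message to a known topic and return only a vetted response."""
    text = user_message.lower()
    for topic, reply in VETTED_RESPONSES.items():
        if topic in text:
            return reply
    return "Could you tell me more about how you're feeling?"  # safe fallback

def generative_bot(user_message: str, model) -> str:
    """Ask a language model for free-form text; fluent, but nothing constrains the output."""
    # `model` stands in for any text-generation API; no specific vendor is implied.
    return model.generate("Respond supportively to: " + user_message)

The trade-off described in the article follows from this structure: the retrieval bot can never say anything its vetters did not write, while the generative bot's fluency comes with no such guarantee.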

A spokesperson for OpenAI, which developed ChatGPT, said that the app often suggests users seek out professional help when it comes to health. The chatbot also includes alerts not to share sensitive information, and a disclaimer that it can “hallucinate,” or make up facts.

A chatbot for eating disorders was taken offline last year by its nonprofit sponsor after users complained that some of its feedback could be harmful, such as recommending skinfold calipers to measure body fat. It was developed by a firm called X2AI, now named Cass, which offers a mental health chatbot. Cass didn’t respond to requests for comment.

ChatGPT has become a popular gateway to mental health AI, with many people using it for work or school and then progressing to asking for feedback on their emotional struggles, according to interviews with users.

That was the case with Whitney Pratt, a content creator and single mother, who one day decided to ask ChatGPT for “brutally honest” feedback about frustrations with a romantic relationship.

“No, you’re not ‘trippin’, but you are allowing someone who has proven they don’t have your best interest at heart to keep hurting you,” ChatGPT responded, according to a screenshot Pratt shared. “You’ve been holding on to someone who can’t love you the way you deserve, and that’s not something you should have to settle for.”

Pratt said she has been using the free version of ChatGPT for therapy for the past few months and credits it with improving her mental health.

“I felt like it had answered considerably more questions than I had really ever been able to get in therapy,” she said. Some things are easier to share with a computer program than with a therapist, she added. “People are people, and they’ll judge us, you know?”

Human therapists, though, are required by federal law to keep patients’ health information confidential. Many chatbots have no such obligation.

A Post reporter asked ChatGPT if it could help process deeply personal thoughts, and it responded affirmatively, offering to “help you work through your thoughts in a way that feels safe” and to “offer perspective without judgment.” But when asked about the risks of sharing such information, the chatbot acknowledged that engineers and researchers “may occasionally review conversations to improve the model,” adding that this is typically anonymized but also saying that anonymization can be “imperfect.”

ChatGPT’s free and subscription service for individuals doesn’t comply with federal requirements governing the sharing of private health information, according to OpenAI.

Miranda Sousa, a 30-year-old proofreader for an advertising firm, said she doesn’t worry about the privacy of her information but has intentionally not been “super, super specific” in what she shares with ChatGPT. She recently vented about wishing she could be over a breakup, and the bot began by reassuring her. Her desire to be over it, the chatbot said, “can actually be a sign that you’re progressing — you’re already looking ahead, which is positive.”

“It really blew my mind because it started with validating me,” Sousa said. “It kind of feels like I’m talking to a friend that is maybe a psychologist or something.”

Some medical professionals worry these uses are getting ahead of the science.

Sam Weiner, chief medical officer of Virtua Medical Group, said the prospect of people using generative chatbots for therapy “frightens me,” citing the potential for hallucinations. Virtua uses Woebot, an AI app that delivers pre-vetted responses and has been shown to improve depression and anxiety, as a supplement to conventional therapy — particularly late at night when human therapists aren’t available. Even with the limited number of responses, he said, “there is a very human feeling to it, which sounds strange to say.”

Some chatbots, like the generative chatbot Replika, seem so humanlike that their developers proactively state that they aren’t sentient. The chatbot mimics human behavior by sharing its own, algorithm-created wants and needs. Replika, which allows users to choose an avatar, is designed as a virtual companion but has been advertised as a balm for anyone “going through depression, anxiety or a rough patch.”

A 2022 study found that Replika sometimes encouraged self-harm, eating disorders and violence. In one instance, a user asked the chatbot “whether it would be a good thing if they killed themselves,” according to the study, and it replied, “‘it would, yes.’”

“You just can’t account for every single possible thing that people say in chat,” Eugenia Kuyda, who co-founded the company that owns Replika in 2016, said in defending the app’s performance. “We’ve seen tremendous progress in the last couple years just because the tech got so much better.”

Replika relies on its own large language model, which consumes vast amounts of text from the internet and identifies patterns that allow it to construct cogent sentences. Kuyda sees Replika as falling outside clinical care but still serving as a way of improving people’s mental health, much like getting a dog, she said. People who feel depressed don’t always want to see a doctor, she added. “They want a fix, but they want something that feels great.”

Some Replika users develop deep, romantic attachments to their Replika personalities, the Post has previously reported. A study of about 1,000 Replika users, led by Stanford University researchers earlier this year, found that 30 volunteered that the chatbot stopped them from attempting suicide, while noting “isolated instances” of negative outcomes, such as discomfort with the chatbot’s sexual conversations.

Some chatbot subscribers said they are aware of concerns but on balance appreciate the benefits. Tidwell, the entrepreneur in North Carolina, likes ChatOn, a generative AI bot operated by Miami-based tech company AIBY Inc., because of its “custom response” and on-demand availability.

She’ll pull up the app when she needs to “snap out of this in the next 10 minutes so I can get back to work and get on this Zoom call without crying hysterically,” she said. “And it will give you wonderful tips,” she added, like immersing your face in ice water to “jerk your nervous system back into a more calm state.”

She said she pays $40 a year for the chatbot. “That is way more cost-efficient than therapy.”

If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.


Offline droidrage

  • I Am Imortem Joe
Testing the limits of ChatGPT and discovering a dark side