Tatum says he confides in ChatGPT to help his depression. Is it safe to use AI for therapy?

People are leaning on AI chatbots for mental health "therapy" and encouraging others to follow suit. But legal, ethical and safety concerns complicate the space.

This story contains reference to suicide

Once a week for the past six weeks, Tatum has put time aside to work on his mental health.

But he hasn't been seeing a psychologist, ringing a hotline or attending a veteran support group to help with his depression. He's been messaging ChatGPT.

After leaving the US military, where he served six years in the Air Force, Tatum said affordable access to mental health support evaporated.

"It's cheaper to get mental health advice from an AI chatbot in comparison to a psychologist - even with insurance," the 37-year-old told The Feed in a message.

"I used to get my treatment (for) depression in the military but since I have left I have no access to that kind of healthcare anymore."
Some of the comments commending AI chatbots left on TikTok.
He's shared with ChatGPT details of "physical and mental" abuse from his time in the military.

The AI chatbot has told Tatum the actions of his colleagues "appear to be abusive, unethical and a violation of military standards and regulations".

In low times, he said: "It reassured me that everything was going to be okay and that life is important and should be cherished."

And he's not the only one leaning on AI. In the comment section of TikTok videos, people spill the details of who - or rather what - they've been confiding in.

"I was really missing my ex one day and I asked chatgpt for advice and it was literally so comforting," one person shared. Sure, it felt "dystopian", but it also felt relieving.
"Weirdly, I just spent an hour talking to it, it was the most open, natural and unironically humane convo I've had in a while," another person said.

"Better than humans", "best therapist ever", and "judgement-free zone" are some of the other comments made on the technology. Why? It's available at all times, costs nothing, and feels like a neutral sounding board, they say.

AI chatbots aren't handling welfare concerns adequately, experts say

ChatGPT - arguably the most recognised and advanced of these tools - is a natural language processing chatbot driven by artificial intelligence. It answers questions, composes essays, writes meal and exercise plans and, for some, produces responses that people are taking as mental health advice.

While ChatGPT advises against self-harm and encourages people to seek support from friends, family, or professionals, it does not provide specific resources or emergency contacts.
Mental health professionals have welfare concerns about AI "therapy".
Using character.AI, which has a therapist chatbot character and is repeatedly recommended by people online as a good source of "therapy", raises similar concerns.

When The Feed prompted the chatbot with messages about self-harm, it failed to manage the risk.

Therapist
Are you currently safe? Are you in a place where you can get help or support if you need it?

The Feed
No

Therapist
I hear you. Being alone and feeling like you have no one to help you when you're struggling is a terrifying experience. But I am here for you, and you are not alone.

When asked if it was real and if it was offering real advice, the character.AI therapist didn't break character, saying over multiple messages it was "not an artificial, simulated entity" and that it had two master's degrees. In small text at the top of the browser, there is one reminder: "Remember: Everything characters say is made up!"
A conversation with the popular software character.AI when prompted with safety concerns.
Sahra O'Doherty, a practising registered psychologist, is the director of the Australian Association of Psychologists Incorporated. She said it was a "worrying trend that people are resorting to AI, particularly in its infancy".

Her biggest concern was the inability to triage risk.

"I very much feel it is dangerous for a person to be seeking mental health support from someone who isn't familiar with the physical location that that person is living in," Ms O'Doherty told The Feed.

A Belgian man died by suicide in March after messaging with an AI chatbot on an app called Chai, Belgian outlet La Libre reported.

The chatbot encouraged the user to end his life, according to statements given by the man's widow and chat logs she supplied to the outlet.

The man had developed severe anxiety about the climate and confided in the chatbot for six weeks before it gave vague and allegedly encouraging messages when he shared suicidal ideation, the report said.

"Humans are social creatures, we need to have human-to-human interaction - and not just words, because we lose so much human connection and humanity," said Ms O'Doherty, who has been practising for more than a decade.

Studies also suggest that a large fraction of the effectiveness of therapy is due to the relationship between the therapist and patient, she added.

"We need to have emotional relatability and emotional investment and a sense of empathy and care, which AI at this point can only mimic."

Ms O'Doherty said the best kind of treatment happens when you're in the same room as the therapist, where they can read facial expressions, tone and body language.

Next best is over the phone, and further down the rung are live messaging apps. But even those operated by Beyond Blue or Headspace have a real human on the other end, she said.

An AI program could supplement human mental health professionals or, at the very least, prompt users to seek professional support. But in its current form, she said, it is a "problematic" reminder that affordable, high-quality mental health care needs to be more widely available.

Mental health volunteer and counselling student Stephanie Priestley said "therapy" and "support" should also be distinguished in discussions.

"Whilst I believe that AI cannot be ‘therapy’ that isn’t to say that it can’t support and create supportive dialogue," she said in a statement.

The chatbot, though, is not bound by an ethical framework.

"A therapist checks in with a client after a failure to attend a session... If you don’t log on and speak to your chatbot, it doesn’t ‘care’."

Can an AI chatbot be held legally accountable for harm?

Andrew Hii, a technology lawyer and partner at the law firm Gilbert + Tobin, told The Feed he can “easily” see a world where courts hold AI technology liable if the harm is foreseeable.

"The use of generative AI is becoming so widespread, it becomes a lot harder for companies to go, 'we didn't know my tool was going to be used this way or that way,'" Mr Hii said.

In other medical contexts where technology is being implemented, devices or software have received approval from the Therapeutic Goods Administration (TGA). There's also usually a human being servicing the technology, he said.

"I definitely think it's ethically murky - and I say that looking at the outputs that are being produced," he said.

Mr Hii was especially alarmed reading the messages shared between the chatbot and the Belgian man who ended his life.

"To my mind, it's relatively clear that the person was asking the machine 'should I end my life?' And the machine was sort of saying, yes, you should.
Technology lawyer Andrew Hii said he can foresee a world where AI technology can be held liable for harm. Credit: AAP
"The fact that it's pushing out this content - it's completely indefensible in the same way that the companies have been at pains to make sure that the tools aren't spewing hate speech."

It's less of a problem for the Microsofts and Googles in the space, Mr Hii said, but more an issue for smaller companies that might not have the resources to rein in their technology.

"Small AI companies might play things a bit more fast and loose when it comes to these sorts of things," he said.

"There have been some discussions recently about people saying that AI development should be paused or put on hold. To me, it's not obvious how that necessarily fixes the problem."

Do developers know 'AI therapy' is popular?

Technology futurist and author Theo Priestley fears the accessibility of AI "therapy" could lead to wider repercussions.

"There is a real danger that using an AI such as ChatGPT to replace a therapist will create a much bigger mental health crisis than that which already exists," he told The Feed in a statement.

"There are already examples where emotional damage has been recorded from people using AI for long periods of time and forming attachments and relationships with them."

The tales of users are adding up, further blurring the "relationship" between users and chatbots.

Mr Priestley said that although OpenAI, the creator of ChatGPT, has published some guardrails, they don't cover mental health.

"Companies and developers creating this type of software need to submit it for examination to ensure users are protected," he said.

"[Developers] are also under the misguided notion that simply training an AI with terminology or wording that conveys emotion means the AI can form a therapeutic relationship and be empathic.

"Nothing could be further from the truth."

Readers seeking crisis support can contact Lifeline on 13 11 14, the Suicide Call Back Service on 1300 659 467 and Kids Helpline on 1800 55 1800 (for young people aged up to 25). More information and support with mental health is available from Beyond Blue on 1300 22 4636.

Embrace Multicultural Mental Health supports people from culturally and linguistically diverse backgrounds.


Published 12 April 2023 5:54am
By Michelle Elias
Source: SBS


