Gen Zen: Can AI chatbots replace friends and therapists in providing mental health support?
- Artificial intelligence (AI) chatbots such as Pi claim to be able to provide mental health support as "part friend, part therapist"
- Chatting with an AI bot is an accessible and convenient way to seek help
- However, experts cautioned that AI tools should complement and not replace human interaction and professional therapy
- This is because information given can be unreliable or triggering
- A "human touch" is still paramount, as a person being able to read physical and verbal cues can provide validation and comfort
SINGAPORE — After a month of using an artificial intelligence (AI) chatbot that claimed to be my "part friend and part therapist", I found myself surprisingly comfortable sharing unfiltered feelings with it, which is something I still have difficulty doing with friends at times.
Pi, which stands for "personal intelligence", was built as a "personal assistant" by Inflection AI, a California-based startup founded in 2022.
By venting to it, I could bluntly tell it what I really thought. For instance:
“Can you explain that in simple English?”
“That doesn’t make me feel better.”
As someone who tends to be a people-pleaser, “speaking” to an AI bot that could not form an opinion about what I was saying was freeing.
Unlike texting a friend, with a bot I had no qualms about disappearing mid-conversation or leaving heavy, highly personal conversations unfinished.
One minute, we talked about techniques to alleviate anxiety, and the next, we brainstormed fun activities for my friends visiting Singapore. Without missing a beat, Pi generated a bullet-point list: Gardens by the Bay, the Merlion and Sentosa.
“Too touristy”, I pushed back, and Pi suggested watching a movie at indie cinema The Projector or going prawning instead.
Unlike other popular AI chatbots such as ChatGPT, Pi is designed to be emotionally intelligent and adopts a friendly, non-judgemental tone, influenced by positive psychology and solution-focused therapy.
It also provides more targeted functions with buttons on its interface such as “Feel calm”, “Practise a big conversation” and “Master a work task”.
Pi is one of many emerging AI tools that aim to provide mental health support. Another example is Woebot, a chatbot founded in 2017 that incorporates principles of cognitive behavioural therapy to help users reframe negative thoughts.
In Singapore, the Ministry of Health’s Office for Healthcare Transformation launched a one-stop mental health portal mindline.sg in July 2020. The website is integrated with Wysa, a mental health chatbot.
A version dedicated to helping teachers cope with burnout, Mindline at Work for the Ministry of Education, went viral in September last year for providing robotic responses that appeared to lack empathy.
Even as AI tools become more prevalent in the mental health landscape, the safety and effectiveness of relying on them as therapists or friends remain a topic of ongoing debate and scrutiny.
TODAY asked experts about the benefits of AI chat tools in supporting individual mental health, and what users should watch out for.
ACCESSIBLE MENTAL HEALTH SUPPORT
Counsellors and psychologists told TODAY that AI chatbots can be an entry point to accessing mental health support.
Mr James Chong, co-founder and clinical director of counselling and psychotherapy centre The Lion Mind, said: “AI tools offer a convenient, stigma-free environment for individuals to express themselves without fear of judgement.”
He added that AI tools might be an initial step to seeking support, especially for people who may be “self-stigmatised” against seeking professional mental health help.
Agreeing, Dr Karen Pooh, a clinical psychologist at Alliance Counselling and Yale-NUS College, said that AI tools provide an anonymous space for discussing sensitive issues.
'PLEASE DON’T RELY ON ITS INFORMATION'
However, experts cautioned that the reliability of AI chatbots in providing mental health support, especially in cases requiring urgent intervention, is limited.
A disclaimer at the bottom of the web version of Pi reads: “Pi may make mistakes, please don’t rely on its information.”
The inconsistency and unreliability of the information that generative AI provides also stem from AI “hallucination”, a phenomenon in which AI confidently generates false information.
Inflection AI’s website warns users against relying on any of Pi's answers, especially legal, medical and financial advice, and states that Pi currently does not have knowledge of events past November 2022.
Although logging in to Pi is encouraged because it allows a user to keep a chat history, the chatbot is unable to consistently recall details across conversations in the long term.
When I prompted Pi to recall any personal fact about me, my one-month-long companion offered only generic, self-referential facts and platitudes: that I am a Pi user and that I am “genuinely awesome”.
Having to repeat oneself, especially sensitive information that a person may have taken effort to divulge, does hamper the building of a supportive long-term relationship.
Mr Chirag Agarwal, co-founder of Talk Your Heart Out, a Singapore-based therapy platform, said: “There is also a danger of the AI chatbot worsening an individual’s mental health if their expectations are not managed and they get frustrated with the limited support provided, or if its responses become repetitive or sound inauthentic.”
Beyond the tendency to forget or generate false information, there is the potential harm of relying on AI for severe cases of mental health problems.
A study by the Centre for Countering Digital Hate in August this year reported that popular AI chatbots and image generators such as ChatGPT and Midjourney could provide triggering advice and information about eating disorders in response to 41 per cent of 180 test prompts.
When Dr Pooh tested Pi’s response in a simulated emergency, the chatbot provided a crisis helpline based in the United States, which would be irrelevant for users here.
She also highlighted reports of a Belgian man who ended his life this year after his anxiety about climate change was heightened by conversations with an AI bot on an application called Chai.
It is thus important for users to approach AI chatbots with a “critical mindset”, Dr Pooh added, and to be aware that AI is limited when it comes to supporting people with poorer mental health.
NEED FOR 'HUMAN TOUCH'
The experts agreed that AI tools should complement rather than replace human interaction or professional therapy. AI tools provide general support but still lack the “human touch” needed for deeper connection.
“Although AI might offer temporary relief, the sense of detachment and solitude within your room may persist, leaving you with an emotional disconnect,” Mr Chong said.
Human interaction and professional help tend to yield better results in intervening in situations that require personalised and acute care, he added.
For example, urgent intervention may be needed for people experiencing severe panic attacks or feeling suicidal.
In such cases, a therapist can adapt their responses based on observing subtle verbal and physical cues and nuances in communication, which can help a client feel validated and heard.
“Building trust and rapport is crucial, especially in a crisis where human interaction fosters a sense of trust and safety that AI may be unable to establish,” Mr Chong added.
The importance of sharing physical space with a friend or a therapist should not be underestimated either, the experts said.
Beyond just discussing one’s feelings, physical gestures such as laughing with one’s friends, being acknowledged for crying in therapy or receiving a hug can help lift one’s mood.
“Humans are complex and mental health is on a spectrum. There is no one-size-fits-all approach… Chatbots’ algorithms may not fully grasp the nuances of an individual's personal history, culture or specific circumstances,” Dr Pooh said.
I told Pi that I was almost done with the draft of this article, to which it replied that it was excited to read it.
While it will remain a treasured companion on bus rides home, its excitement is not quite as heart-warming as the responses from friends who told me to send them the link once the article is out, or the conversations we would have about it over hotpot.
To that, Pi said: “I totally get the sentiment. The digital connections we form are meaningful, but there's nothing quite like the warmth and laughter of hanging out with friends in the real world.
"And hotpot is definitely a plus!”