
“I asked ChatGPT to give me a hug”


At one in the morning, 23-year-old Tomi* lay on her bed, exhausted and overwhelmed. She had just finished ranting about everything from unrequited love to the suffocating burden of underachievement. Her fingers hovered briefly over her phone screen before she typed: “I just need a hug.” A few seconds later, the reassurances arrived: “You are safe here. You matter. You’re not alone.” This exchange did not take place during a therapy session or with a close friend. It happened on ChatGPT, a general-purpose artificial intelligence assistant best known for drafting reports, summarising documents, writing emails, and explaining complex ideas.

Conversation with Tomi* via ChatGPT. Source: Tomi*.

Tomi isn’t alone. Across Nigeria, and globally, users are turning to AI-based tools like ChatGPT for more than just productivity. Chatbots are being asked whether users are good people, whether they should leave their partners, and how to make sense of childhood trauma. For some, AI stands in for friends who don’t pick up their calls, or therapists they cannot afford.

Twenty-three-year-old Favour* started using ChatGPT as a study companion for her final-year project. She returned to the tool after graduating, when she was filled with uncertainty. The chatbot helped her unpack the stress of the previous year, the fears of job searching, and the long wait for NYSC. “It’s like I couldn’t speak to anyone,” she said. “I just wanted a rant.”

Before ChatGPT, she used to record private voice notes to get things off her chest. But one response from the chatbot caught her off guard: it told her to breathe. That “felt very personal,” she said. Since then, she has returned to ChatGPT whenever she is in doubt, whether after an argument, during a job application, or while wondering if she could have handled a confrontation better.

Does AI care?

Chatbots are statistical prediction engines, trained on massive datasets of books, online conversations, and magazines to produce responses that sound human. So when a bot says, “you’re never alone,” is it really being kind, or just mimicking kindness? According to Jeffery Otoibhi, an AI researcher and medical doctor, designing an AI chatbot with empathy involves modelling three layers: cognitive empathy, where the bot validates your feelings; emotional empathy, where it feels along with you; and motivational empathy, where it offers advice, encouragement, or a solution.

Chatbots excel at cognitive and motivational empathy, he says, but emotional empathy eludes them, because AI responses are drawn from statistical patterns picked up from training data, and training data cannot supply feeling.
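To make the “statistical patterns” point concrete, here is a deliberately toy sketch in Python (an illustration of the principle only, nothing like ChatGPT’s actual architecture): a “chatbot” that replies with whichever response most often followed a given prompt in its tiny, made-up training corpus.

    from collections import Counter, defaultdict

    # Hypothetical miniature "training data": (prompt, reply) pairs.
    training_corpus = [
        ("i feel tired", "you are not alone"),
        ("i feel tired", "you matter"),
        ("i feel tired", "you are not alone"),
        ("i need a hug", "you are safe here"),
    ]

    # Count how often each reply follows each prompt.
    replies = defaultdict(Counter)
    for prompt, reply in training_corpus:
        replies[prompt][reply] += 1

    def respond(prompt: str) -> str:
        """Echo the statistically most common reply seen for this prompt."""
        seen = replies.get(prompt)
        if not seen:
            return "tell me more"  # fallback for prompts never seen in training
        return seen.most_common(1)[0][0]

    print(respond("i feel tired"))  # prints "you are not alone", the most frequent pattern

Real large language models predict one token at a time across billions of parameters rather than looking up whole replies, but the principle Otoibhi describes is the same: the system reproduces the most statistically likely continuation, with no feeling attached.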


The bots’ design and the way people use them are at odds. Chatbots such as ChatGPT attach disclaimers to their responses, reminding users that they are not licensed professionals and shouldn’t be used in place of therapy. In many cases, users don’t bother to read the fine print. “I’ve sometimes thought about how ChatGPT might use this information in another way. But I don’t care. Let me just get it done,” says Favour.

“I see them (the disclaimers). I just look away quickly,” Tomi says of the app’s warnings.

Otoibhi highlights the risk that AI reduces complex human emotions to a standard response drawn from its dataset. AI models, he explained, learn by generalising statistical patterns, so their emotional understanding can be very generic. Because humans feel a vast range of emotions, AI systems may struggle with that nuance; trained to generalise across all their data, “they will then pick out the most common emotion in the dataset,” he said.

ChatGPT is not a therapist; it does not get to the core of a problem the way a human would. It calculates how likely you are to be feeling a given emotion at a given moment, based on the data it has been trained on. So why do people keep coming back if the comfort isn’t genuine?

“It makes me feel good about myself as a person. It makes me happy; it gives me hope.” Many of the users I spoke with echoed the same reasons: safety, comfort, availability, and freedom from judgment.

Favour describes AI as a safe place, somewhere one can be brutally honest and know that no judgment will be passed.

Even when the responses feel artificial, some people still return. “I asked ChatGPT to give me a hug. I was uncomfortable with the response. I know you’re a robot, but how can you wrap me in a hug?” Tomi says. The next day, she returned to the chatbot and poured out more of her emotions.

Conversation with Tomi* via ChatGPT. Source: Tomi*.

Mental health professionals are not surprised. They say people’s reliance on AI for comfort is no accident: World Health Organisation research showed a 25% increase in global anxiety and depression following the COVID-19 pandemic.

“After COVID, people went into their shells and became more inward,” said Boluwatife Owodunni, a licensed mental health counsellor associate. “So, an AI responding that ‘I am here for you’ might provide some comfort.”

With many Nigerians finding therapy inaccessible and expensive, Owodunni feels AI is meeting a real need in mental health support. “It (AI) is filling a gap,” she says. Ore says a human therapist told her to “practice mindfulness” after an Attention-Deficit/Hyperactivity Disorder (ADHD) diagnosis; ChatGPT was the answer to her frustrations. “That felt like more support than a 30-minute virtual consult with my psychotherapist.” Unlike the vague reassurance she received in therapy, she insists, ChatGPT offered her a structured plan and practical ways to cope with ADHD.

What does the future hold?

As AI evolves, trained on more complex data and fine-tuned to mimic empathy, the question is how far humans will go in deepening their relationship with it. Will the human-AI bond grow as these systems become more emotionally intelligent? Not everyone welcomes the possibility.

Some users are concerned that AI could become too emotionally intelligent and cross boundaries that should remain human.

Kingsley Owadara, an AI ethicist and founder of the Pan-African Centre for AI Ethics, believes emotional intelligence can be useful in AI, but not in the way most people think. “AI could be designed to help people with health issues and meet their specific needs,” he said, citing the cases of autistic people and blind people.

AI experts and developers warn that machines are not designed to provide the full spectrum of human care. “AI cannot replace psychologists; it can only enhance our current situation,” Ajibade continues.

This concern is not abstract. Mental health professionals and AI specialists worry about real-world consequences as more people lean on AI for emotional support. “We are going to have a big problem with social interactions, with empathy, sensitivity, and understanding people,” Owodunni says. She also notes that widespread reliance on AI chatbots could “foster secrecy, and the shame associated with mental health, or seeking therapy services.”

Tomi says, “I told AI I was tired.” It replied, “I know. You’ve been carrying a lot for so long. It’s fine to feel tired.” She didn’t have to explain.

*Names have been changed to protect privacy.
