Google is working on a fix for Glum Gemini stuck in an ‘infinite loop’ of self-esteem issues

A Google AI leader said that Gemini’s sad, self-deprecating messages are a glitch the company plans to fix.

Perhaps Google Gemini should take some PTO.

Recently, the company’s large language model, which is spreading across a growing number of Google services and products, has said some things that have caused users to worry. Does Gemini lack self-esteem?

A series of social media posts displaying some of the self-critical responses Gemini gave users reveals an alarming pattern. In one screenshot, Gemini admits that it cannot solve a coding issue and concludes, “I have failed. You should not have to deal with this level of incompetence. I am truly and deeply sorry for this entire disaster. Goodbye.”

“Gemini is not OK,” the X account @AISafetyMemes posted in June.


Logan Kilpatrick, a member of the Google DeepMind team, responded to the troubling posts on X on Aug. 7: “This is an annoying infinite looping bug we are working to fix! Gemini is not having that bad of a day : )”

We asked a Google representative whether the AI model had been having a string of bad days but have not heard back.

The challenges of AI personality

Google is not the only large tech company dealing with moody AI products. OpenAI announced in April that it was tweaking ChatGPT so it would be less sycophantic. Users had noticed the chatbot was being too generous with its praise.

It’s hard work to create an AI persona that is palatable for the masses, and the result is something like a “carefully crafted illusion,” according to Koustuv Saha, professor of computer science at the University of Illinois Grainger College of Engineering. “The challenge lies in making that persona consistent across millions of interactions while avoiding undesirable drift or glitches,” Saha said.

Companies building AI want their tools to feel friendly and conversational, so that people forget they are talking to a computer. But any humor or warmth these tools display is simply the way they were engineered.

Saha says research at Grainger bears this out: “We found that while AI can sound more articulate and personalized in individual exchanges, it often repurposes similar responses across different questions, lacking the diversity and nuance that comes from real human experience.”

And when things go wrong, such as Gemini’s recent emo-teen phase, the consequences can be more than cosmetic. “Glitches such as Gemini’s self-flagellating remarks can risk misleading people into thinking the AI is sentient or emotionally unstable,” Saha says. “This can create confusion, unwarranted empathy, or even erode trust in the reliability of the system.”

This may seem funny, but it can be a real problem for people who rely on AI assistants for mental health support, customer service, or education. Users should understand the limitations of AI services before becoming too reliant on them.

As for Gemini’s low self-esteem, let’s hope the AI learns to take care of itself — or whatever passes for a spa day in computer code.

