Google is integrating more AI into the way internet search works. Remember AI Overviews? That’s the AI-generated summary of website content that appears at the top of the Google Search page.
This error-prone feature has been expanding in the US market, powered by the new Gemini AI models; the requirement to sign in with a Google Account has been removed, and it is now open to all users. Google has now introduced a new AI Mode that gives a similar treatment to the entire Search page.
AI Mode is currently available as a Search Labs experiment. It transforms the traditional Google Search page of website links into a dialogue, similar to how AI chatbots deliver answers. If the history of AI Overviews tells us anything, it’s a delightful convenience that could also be dramatically wrong.
What is AI Mode in Google Search?
The idea is to provide users with all the information they require, pulled from indexed websites, and to save them the trouble of clicking on sources and browsing through pages to find answers. You can ask for more information in natural language, rather than with a keyword-stuffed search, and you can even include details that would otherwise require several follow-up searches.
The company explains that it uses a “query fan-out” technique, which simultaneously runs multiple searches across related subtopics and data sources, then combines the results into a single response. Google warns, however, that AI Mode may not always get things right, despite encouraging internal tests. When AI Mode isn’t confident about a summarized answer, it will fall back to a list of search results, just like traditional Google Search.
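To make that fan-out idea concrete, here is a minimal, purely illustrative Python sketch of the pattern as described above. The function names, the way the question is split into subqueries, and the confidence fallback are all assumptions made for demonstration; this is not Google’s actual implementation.

```python
import asyncio

# Hypothetical stand-ins for Google's internal systems; every name here is illustrative only.

async def search_subtopic(subquery: str) -> list[str]:
    """Pretend search backend: returns snippets for one subquery."""
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"snippet about '{subquery}'"]

def split_into_subqueries(question: str) -> list[str]:
    """Naively break a natural-language question into narrower subqueries."""
    return [f"{question} basics", f"{question} comparisons", f"{question} recent news"]

def summarize(snippets: list[str], confidence_threshold: int = 3):
    """Combine snippets into one answer, or fall back to a plain result list
    when there is too little material to be confident."""
    if len(snippets) < confidence_threshold:
        return snippets  # fallback: behave like a traditional results page
    return " ".join(snippets)  # a real system would have an LLM write the summary

async def query_fan_out(question: str):
    subqueries = split_into_subqueries(question)
    # Fan out: run every subquery concurrently, then gather all the snippets.
    results = await asyncio.gather(*(search_subtopic(q) for q in subqueries))
    snippets = [s for group in results for s in group]
    return summarize(snippets)

if __name__ == "__main__":
    print(asyncio.run(query_fan_out("mirrorless cameras for travel")))
```

The real system obviously involves retrieval, ranking, and a large language model rather than string concatenation; the sketch only shows the shape of the fan-out-and-combine flow, including the fallback to plain results that Google describes.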
In its current form, it can provide answers as a wall of text or neatly formatted tables, and down the road, images and videos will also be included. AI Mode is currently available only to Google One AI Premium subscribers, and it will roll out as an opt-in experience.
That’s a bad omen for anyone reliant on Google Search, especially if we are talking about accuracy. Here’s an example: I looked up whether we are living in the year 2025. Google’s AI Overview said it is the year 2024. The first source it cited for that information was Wikipedia, which explicitly says the current year is 2025.
A rich history of risks
The idea behind AI Mode for Google Search is theoretically rooted in user convenience. However, the fundamental tech stack behind it still suffers from problems that the entire AI landscape has yet to fix. One of them is AI hallucination, which is essentially an AI tool making up information and confidently presenting it as fact.
Google’s AI Overviews are the best example of those missteps, and the mistakes continue to pop up to this day. Take, for example, this evidence, shared merely a few hours ago on Reddit, in which the AI Overview confidently lied about a right-side driving rule in India.
That’s a false statement, and yet at no point does the language of the AI Overview suggest that the user should fact-check the information. “It’s so inaccurate and so buggy that I’m surprised it even exists,” reads another report detailing its sheer inaccuracy.
AI Overviews appear only as a condensed nugget of information served at the top of the Google Search page. Now imagine an entire page presented to users as one long AI-generated answer, with a few source links interspersed through the wall of text.
Google says AI Overviews will excel at “coding, advanced math and multimodal queries.” Yet, not too long ago, the feature fumbled facts and turned history on its head, especially with the kind of natural-language queries that are being hyped for AI Mode.
When asked whether astronauts had met cats on the moon, it confidently agreed that it was true, adding that astronauts even took care of those lunar cats. Virginia Tech digital literacy expert Julia Feerrar remarked that AI doesn’t actually know the answers to our questions, citing an example in which Google’s AI Overview confidently described Barack Obama as the first Muslim president.
(Embedded tweet from @heavenrend, May 22, 2024: https://t.co/W09ssjvOkJ, pic.twitter.com/6ALCbz6EjK)
The consequences of AI misinformation could be disastrous, especially when it comes to health and wellness queries. In an analysis of over 30 million Search Engine Results Pages (SERPs), Serpstat found that health-related searches are the most popular category in which AI Overviews appear.
This is the same tool that suggested eating at least one rock per day, adding one-eighth of a cup of glue to pizza, and drinking urine to pass kidney stones, and that, as recently as 2025, claimed a baby elephant can fit in a human palm.
This is not the Search evolution I seek
Despite Google’s assertions about how its AI models have evolved, the situation hasn’t improved dramatically. Less than a day ago, Futurism spotted AI Overviews confidently claiming that MJ Lenderman had won 14 Grammy awards.
It even got the year wrong when I asked something as simple as “is it 2025” in the Google Search box. “No, it is not currently the year 2025. The current year is 2024,” said the AI Overview.
Going a step further, it explained how 2025 is a common year that starts on a Wednesday, then added a bunch of unrelated information, discussing everything from national celebrations to UN declarations that had absolutely nothing to do with my query.
Now, I am not entirely against AI. On the contrary, I extensively use tools like Gemini Deep Research, and often rely on the latest Gemini 2.0 Flash AI model for creative ideas when my brain cells are not firing off at peak capacity.
However, pushing an error-prone AI overhaul to a source of information as indispensable as Google Search is a risky proposition. Digital Trends has reached out to Google and will update this story once we get a response.