Google touts the power of generative AI as what lets Gemini access data from multiple apps, and when it works, it can be genuinely useful. You can, for example, ask Gemini to search your email for a specific message, extract data from it, and pass that data along to another app. This functionality was exciting at first, but it has made me miss Assistant’s ability to fail without wasting my time.
This issue was driven home recently when I asked Gemini to pull a shipment’s tracking number from an email, something I do quite often. The robot appeared to work as intended, citing the correct email and spitting out a long string of digits. I didn’t notice anything was wrong until I tried to look up the package. The number didn’t work in Google Search’s shipment tracker, and the US Postal Service’s website returned an error.
Then it dawned on me: this wasn’t a tracking number at all, but a confabulation. And a believable one, too. The number was the right length and, like all USPS tracking codes, it began with a nine. Untangling Gemini’s mistake took far longer than it would have taken me to find the number myself, which is frustrating. Gemini was confident that it had completed my task, but getting angry at the chatbot would do no good. It can’t understand my anger any more than it understood my initial query.
I would kill to have Assistant’s “Sorry, I don’t understand.”
I can’t even count the number of times Gemini has put the wrong data in a message or added the wrong calendar event. Gemini is usually good at these tasks, but its mechanical imagination wanders often enough that its usefulness as an assistant becomes suspect. There was plenty Assistant couldn’t do, but it didn’t waste my time pretending otherwise. Gemini’s failures are more insidious: it claims to have solved my problem while in reality sending me down a rabbit hole to fix its mistakes. If a human assistant behaved like this, I’d have to conclude they were either incompetent or malicious.