This is directly motivated by reading about the Browser Company’s pivot to a chatbot-centric browser, but there are many similar cases.
I feel like this is obvious, but I’m not hearing it shouted from the rooftops, so here it is: adding an untrusted middleman to your information diet and all of your personal communications will eventually become a disaster, one that will be obvious in hindsight.
You ask OpenAI for a product recommendation, and it recommends a product that they’re associated with, or one that a company is paying them to promote. Or maybe some company detects OpenAI’s web scraper and delivers customized content to win the recommendation. You just don’t know.
This is obviously going to happen. Google promoted its own products in search. Amazon recommends its own products, eagerly ripping off the branding and terms used by other companies. Microsoft promotes its own AI, Copilot, when you use Microsoft’s search engine, Bing, to search for Google’s AI, Gemini. This kind of stuff is not illegal enough to attract enforcement in the US, and it’s obviously good for business, so companies do it with gusto, even when it’s totally obvious to everyone.
And this is just the ‘economic crimes’ part of the equation, because manipulation there shows up in lawsuits. What about ideological manipulation? There’s plenty of pre-AI evidence for that too: Careless People, the Facebook tell-all book, is chock full of examples of insiders turning the dial to promote some people or silence others on the platform. AI will be this, just harder to detect and more efficient.
When it comes down to it, the chatbot doesn’t work for you. It works for its maker, and it is not accountable for anything it does.
Becoming dependent on the chatbot is like becoming dependent on a butler for all of your news and communications: convenient at first, but eventually you’re going to get gaslit or snuffed out with a pillow.