OpenAI CEO Sam Altman outlined a vision for the future of ChatGPT at an AI event hosted by Sequoia earlier this month.
When an attendee asked how ChatGPT could become more personalized, Altman responded that he eventually wants the model to document and remember everything in a person’s life.
He said that the ideal would be a “very small reasoning model with a billion tokens of context, which you can put your entire life into.”
This model would be able to reason across your entire context and do so efficiently. Every conversation you’ve ever had, every book, email, and webpage you’ve ever looked at would be there. “Your life keeps adding to the context,” he said.
He added, “Your company does the same for all of your company’s information.”

Altman may have a data-driven reason for thinking this is ChatGPT’s natural future. In the same discussion, when asked about cool ways young people use ChatGPT, he said, “People use it as an Operating System.” They upload files, connect data sources, and run “complex questions” against that data. He said he’s noticed a trend where young people “don’t really make life-changing decisions without ChatGPT,” which, with its memory options, can use previous conversations and memorized facts as context. He also described a generational split: older people use ChatGPT as a Google substitute, while “people in their 20s or 30s use it as a life adviser.”
It’s not hard to imagine ChatGPT becoming an all-knowing AI system like this. It’s an exciting future to picture, especially paired with the agents the Valley has been working on.
Imagine an AI that automatically schedules your car’s maintenance and reminds you about it. Or plans the travel for an out-of-town wedding and orders the gift from the registry. Or preorders the next volume of the book series you’ve been reading for years.
And the scary part? How much should we trust a Big Tech company, which is ultimately a for-profit business, to know everything about our lives? These are companies that don’t always behave in model ways.
Google, which began with the motto “don’t be evil,” lost a lawsuit in the U.S. that accused it of anticompetitive, monopolistic behavior. Chatbots can also be programmed to respond in politically motivated ways. Chinese bots have been found to comply with China’s censorship rules, and xAI’s chatbot Grok this week was randomly discussing a South African “white genocide” when people asked it completely unrelated questions. Many people noted that the behavior implied deliberate manipulation of its response engine by Elon Musk, its South African-born CEO.
Last month, ChatGPT became so agreeable it was downright sycophantic. Users began sharing screenshots of the bot applauding problematic, even dangerous, ideas and decisions. Altman responded quickly by promising that the team had fixed the tweak that caused the problem.
Even the most reliable, best-reviewed models can still make mistakes.
An all-knowing AI could improve our lives in ways we’ve only begun to imagine. But given Big Tech’s history of questionable behavior, it’s also a setup ripe for misuse.