Cognitive migration is underway. The station is crowded. Some have boarded; others hesitate, unsure whether the destination is worth the trip.
Future-of-work expert and Harvard University professor Christopher Stanton has commented that the uptake of AI is tremendous, calling it an “extraordinarily rapid-diffusing” technology. That pace is what separates the AI revolution from earlier technology-led transformations like the PC and the internet. Intelligence, or at least the ability to think, is increasingly shared by humans and machines, a shift that labs like Demis Hassabis’s Google DeepMind are driving. Some people now use AI in their daily workflows. Others have gone further, integrating AI into their creative identities and cognitive routines. The “willing” include consultants fluent in prompt design, product managers retooling systems, and founders building businesses that use AI for everything from coding to design to marketing.
For them, the terrain is new but attainable. Exciting, even. For many others, this moment is strange and unsettling. The risk is not only being left behind; it is not knowing when, how or whether to invest in AI, and facing a future that is uncertain and difficult to imagine. This is the double risk of AI readiness, and it is changing how people perceive the pace, promises and pressure of the transition.
Is it real or not?
AI tools are reshaping work faster than norms and strategies can keep pace. What the change means is still unclear, and so are the strategies for responding to it. The endgame, if one exists, remains uncertain. The pace and scope of the change feel ominous. Everyone is told to adapt, but few understand what that means or how far the changes will go. Some AI leaders claim that superintelligent machines will arrive within a few short years.
However, this AI revolution may fail, just as previous ones have, and a winter would follow. There have been at least two notable AI winters. The first, in the 1970s, was caused by computational limitations. The second began in the late 1980s, after a wave of unmet expectations and under-delivery from “expert systems.” Both were marked by a cycle of high expectations followed by profound disappointment, which led to significant reductions in funding and interest in AI.
If the excitement surrounding today’s AI agents echoes the failed promise of expert systems, another winter could follow. But there are major differences between then and now: far greater institutional buy-in, consumer traction and cloud computing infrastructure than the expert systems of the 1980s ever had. There is no guarantee a new winter won’t occur, but if the industry fails this time, it won’t be for lack of money or momentum. It will be because reliability and trust broke first.
Cognitive migration is underway
If a “great cognitive migration” is truly underway, this is the beginning of the journey. Some people have already boarded; others are still unsure whether to board at all. The atmosphere at the station is restless, as travelers sense a change in the itinerary that has not been announced.
Most people still have jobs, but they are worried about their exposure. The value of their work is changing, and performance reviews and company town halls are accompanied by a quiet but growing anxiety.
AI is already being used to accelerate software development by 10 to 100X, generating the majority of client-facing code and compressing project timelines dramatically. Managers can now use AI to create employee profiles and performance evaluations. Even classicists and archaeologists have found AI valuable, using it to understand ancient Latin inscriptions.
The “willing” may have a clearer sense of their future and find some traction. For the “pressured,” the “resistant,” and even those not yet touched by AI, the moment sits somewhere between anticipation and grief. These groups are beginning to realize that their comfort zones may not last.
This is not only about learning new tools or adjusting to a different culture; it is about whether there is room for them in that culture at all. Waiting too long could mean long-term loss of employment. Even those who have advanced in their careers and now use AI wonder if they are at risk.
The narrative of opportunity and upskilling conceals a more unpleasant truth. For many, this is not a migration; it is a managed dislocation. Some workers are not opting out of AI. They are learning that the future being built does not include them. Believing in the tools is not the same as belonging to the system those tools are reshaping. Without a clear path to meaningful participation, “adapt or be left behind” starts to sound less like advice and more like a judgment. These tensions are exactly why this moment matters. The feeling is growing that work as they know it is fading, and the signals are coming directly from the top. Microsoft CEO Satya Nadella admitted as much in a July 2025 memo following a reduction in force, writing that the transition into the AI era “might seem messy at times but transformation is always messy.” But this unsettling reality has another layer: the technology driving the urgent change remains fundamentally unreliable.
The power and the glitch: Why AI is still not trusted
Despite the urgency, the technology remains glitchy, limited and strangely fragile. That creates a second layer of doubt: about the tools we are adapting to, and whether they can deliver. The shortcomings are not surprising; the output of large language models was barely coherent only a few years ago. Now PhD-level answers are available to anyone on demand. Ambient intelligence, once science fiction, is nearly realized.
Underneath the polish, however, the chatbots built on top of these LLMs remain fallible, forgetful and often overconfident. They still hallucinate, and we cannot fully trust their output. AI can answer confidently, but it cannot be held accountable. That is probably for the best, since our expertise and knowledge are still needed. These systems also have poor memories and struggle to carry a conversation from one session into the next.
The bots can also simply get confused. In a recent exchange with a popular chatbot, it answered my question with a non-sequitur. When I pointed this out, it responded off-topic again, as if our conversation thread had simply vanished.
Nor do they learn, at least not in a human sense. Once a model is released by Google, Anthropic or DeepSeek, its weights are frozen; its “intelligence” is fixed. What a chatbot can do is absorb information and make connections within its context window, which is now large enough to sustain a long conversation and allows a kind of in-the-moment learning. The result is that they appear more and more like savants.
These flaws and gifts add up to a beguiling, intriguing presence. But can we believe in it? Surveys like the 2025 Edelman Trust Barometer show that trust in AI is divided. In China, 72% of respondents express trust in AI; in the U.S., that number drops to just 32%. The divergence highlights how public trust in AI is shaped by culture and governance just as much as by technical capability. We would trust AI more if it did not hallucinate, if it could remember. But trust in AI itself remains elusive, and there is widespread concern that the technology will go unregulated and that the public will have little influence over its development or deployment.
Will this AI revolution fail without trust and bring another winter? What happens to those who invested their time, energy, and careers? Will those who waited to embrace AI benefit from their decision? Will cognitive migration fail?
Some prominent AI researchers have warned that AI in its current state, based on the deep learning neural networks underpinning LLMs, will fall short of optimistic projections; further technical breakthroughs, they argue, are needed to take the approach much further. Others reject the optimistic predictions outright. Novelist Ewan Morrison views superintelligence as a fiction dangled to attract investor funding. He has called it a fantasy, a product of venture capitalism gone crazy.
Morrison’s skepticism may be warranted. Yet even with their shortcomings, today’s LLMs are already demonstrating huge commercial utility. Even if the exponential progress of the past few years were to end tomorrow, its ripples would shape the industry for many years. But underneath this movement lies something more fragile: the reliability of the tools themselves.
The gamble and the dream
As companies continue to pilot and deploy AI, exponential advances continue. The industry is moving forward, whether out of conviction or fear. It could all fall apart if a new winter arrives, especially if AI agents fail to deliver. The prevailing assumption remains that today’s shortcomings will be fixed with better software engineering. They might be. They probably will, to some degree.
The gamble is that the technology works, that it will scale, and that the disruption it causes will be outweighed by the productivity it enables. It rests on the assumption that what we sacrifice in human nuance and value will be compensated for by efficiency and reach. Then there is the dream: that AI will be a source of widely shared abundance, that it will elevate rather than exclude, expanding access to knowledge and opportunity rather than concentrating them. The gap between the two is what is unsettling. We are moving forward as if the gamble guaranteed the dream, on the hope that accelerating will get us to a better place and the faith that it will not erode the human elements that make the destination worth reaching. History reminds us, however, that even winning bets can leave people behind. The “messy” transformation now underway is not just a side effect. Cognitive migration continues for now, carried as much by belief as by evidence.
It is not enough to build better tools; we must also ask harder questions about their direction. We are not only migrating toward an unknown destination; we are doing so fast, across a landscape that is still being drawn. Every migration carries hope, and unexamined hope can be dangerous. It is time to ask not just where we are going, but who we will be when we get there.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
