Figure 1: Three decades after the promised “digital transformation” of government
Risks and Opportunities
Amid a sea of grim news and growing policy challenges, a potential bright spot has appeared: a new generation of AI systems with the potential to support and improve work in the public sector. Their proponents claim that these systems will soon replace or radically enhance most types of knowledge-based work.
The pitch to government is simple. Artificial intelligence is the ace in a pack of bad cards. It offers a way to avoid looming labour shortages and, more importantly, to maintain or expand vital services despite budgetary constraints and a shrinking workforce.
The correct response to techno-solutionist optimism, however, is a cautious “maybe”. The introduction of advanced technology will not, by itself, bring about reform in the public sector.
Government agencies and departments still operate according to principles developed during the heavy industry era. The problem-solving methods embedded in the core of state policymaking, decision making, and administration do not align well with modern technology and practices.
AI could contribute to the transformation of the public service, but not without a complete overhaul of government culture and organisation. Otherwise it risks becoming just another in a long line of technological fixes that promise much but fail to deliver.
The UK government has been trying to modernise its operations with digital tools and practices for over 30 years. These “digital transformation” projects have fallen short of their potential because government has not modernised its operating models and structures; it continues to work in a linear, traditional fashion.
To gain value from AI, the state needs to move away from its paper-era, top-down planning. Governments must adopt a more iterative, effective approach to policymaking: embracing digital best practices and experimenting, learning and adapting.
The challenges of adopting AI in the public sector mirror well-known problems experienced with older technologies such as the web and mobile. Instead of genuine transformation, government has simply replicated its existing organisations, processes, transactions, and systems online, missing the opportunity to rethink how policy and public administration are conceived, designed, delivered and continuously improved.
Government remains structured around a model that has barely changed since Henry Ford introduced the moving assembly line in Detroit just before the First World War. Everything is done in steps. The routine followed by governments is reminiscent of the production lines of the Ford era: each team completes a predefined job before handing over to the next, leaving no room to re-evaluate earlier assumptions as the organisation learns.
Manifestos are broad statements of policy, often influenced either by a party’s favoured think tanks or by the most recent attention-grabbing tabloid headlines. These policies are then turned into legislation by departmental policy experts before being passed on to operational and commercial teams.
The first technologist may not be involved for months or even years, and it can take longer still before a policy is ready for public testing. This rigid “waterfall” process stifles the “learning by doing” that is standard in successful digital organisations.
The digital iteration model
Organisations in the digital world are structured differently. They implement a solution as quickly as possible and then iterate to improve it, based on user feedback and interactions. Instead of trying to predict the outcome from the beginning, they experiment and learn from real-world experience. Unfortunately, this iterative method clashes with government policymaking’s rigid, top-down approach. While digital organisations rely heavily on continuous testing, feedback, and learning based on outcomes, most public institutions remain bound by linear processes that put policy first. Manifestos influence legislation before any real-world validation, and hierarchical structures stifle the experimentation that is essential for true transformation.
With two such radically different models, it is no wonder that digital transformation programmes over the last three decades have made so little progress.
Fundamental Conflict
What is the fundamental difference between the two models? It is all about the illusion of predictability.
Politicians and policymakers share a belief that the outcomes of design decisions can easily be predicted in advance. Manifestos are rarely framed as hypotheses or in shades of grey. They assume that reality is stable and mechanical, and that it can be manipulated by a top-down “solution”, often ideological, agreed in advance.
The logic of the electoral process reinforces this mindset. In theory, parties ask voters to accept policies in the abstract before any practical outcomes are delivered.
In contrast, the digital world operates on William Goldman’s principle that “nobody knows anything”. Digital organisations rely heavily on testing and feedback because no one can predict what will work in the real world. Insights from real-world user experience allow them to continually improve their products and services, and to fine-tune their internal structures, operations, and processes.
It is no wonder that attempts to introduce iterative thinking into the state’s linear model have failed repeatedly. The basic assumptions of the two systems are fundamentally different.
The blockers to AI adoption
Why does this long-standing mismatch affect the adoption of AI in government? Because it further amplifies the conflict between the old-school predict-and-control model and the newer model of experiment-and-learn.
Because AI outputs, and the user behaviours that shape them, are probabilistic and unpredictable, developers cannot specify outcomes in a single-shot plan. They must collect real-world feedback, refine model prompts or training data, and course-correct based on how users interact.
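To make this loop concrete, here is a minimal, purely illustrative Python sketch of an experiment-and-learn cycle. Everything in it is hypothetical: collect_feedback and revise_prompt are stand-ins for real user research and prompt or model refinement, not part of any actual government system.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    helpful: bool
    comment: str

def collect_feedback(prompt: str) -> list[Feedback]:
    # Stand-in for observing how real users interact with the current prompt;
    # in practice this would come from live usage analytics or user research.
    if "plain English" in prompt:
        return [Feedback(helpful=True, comment="clear enough")]
    return [Feedback(helpful=False, comment="replies are too bureaucratic")]

def revise_prompt(prompt: str, feedback: list[Feedback]) -> str:
    # Course-correct: adjust the prompt (or, in a real system, the training
    # data) in response to what users report.
    if any(not f.helpful for f in feedback):
        prompt += " Respond in plain English and cite the relevant guidance."
    return prompt

prompt = "Summarise the citizen's query and draft a reply."
for iteration in range(3):                    # each cycle is a small experiment
    feedback = collect_feedback(prompt)       # observe real-world behaviour
    prompt = revise_prompt(prompt, feedback)  # learn and adjust
    print(f"Iteration {iteration + 1}: {prompt}")
```

The point is not the toy logic but the shape of the process: observe, adjust, repeat, rather than specify everything up front.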
This iterative approach is the opposite of attempting to predefine every aspect in a manifesto. It relies on testing and adjusting in real time rather than on adopting a solution dogmatically at the outset.
For policymakers and delivery teams to benefit from this technology, they must adopt an iterative, evidence-based approach. They must abandon the discredited notion that final outcomes can be specified at the start, before any “learning by doing” has begun.
Missed opportunities in policymaking
Governments’ linear approach to policymaking locks in questionable assumptions and constraints before a policy even touches the real world.
This approach creates missed opportunities. Generalist politicians and officials are often unaware of the technology that could be used to design and deliver better policy outcomes.
Worse still, the state’s linear mentality amplifies risk. Unintended consequences are often only realised much later, by which time policies are already set in stone, making it difficult to change course and mitigate new harms.
It is understandable that governments are concerned about ensuring fairness and equality when adopting new technologies. This is best managed through a “learning by doing” approach that embeds legal and ethics review, and user feedback loops, into every stage of the process.
What it means
Governments’ top-down, departmental, project-based approach to procurement exacerbates these problems. Funds are allocated to siloed, one-off efforts, but technologies such as AI require ongoing investment and tuning. Every initiative must navigate a never-ending stream of new and improved models with ever-evolving capabilities. With such rapid improvement cycles, there is no such thing as “job done”.
In summary, the machinery of democratic government – manifestos, top-down policies, one-off budgets, procurement organised by department and project, and long implementation cycles that only engage technical expertise downstream – is fundamentally mismatched with the technologies remaking the world.
The state’s mindset is still anchored in the age of heavy industry and linear process automation, rather than transformation and change. Meanwhile, artificial intelligence is pushing this unreformed state into a future that, until recently, was seen only in science fiction.
The good news is that technology companies and democratic governments have at least one thing in common: both want to find out what people need and provide it as quickly and effectively as possible (Figure 2).
Figure 2: The fusion of policymaking and digital practices

The real value of AI is not in automating the bureaucracy of yesterday, but in reimagining the policymaking process and democratising it from the ground up.
How governments should react
If governments want to harness AI’s potential to tackle their social and economic problems, they must recognise that technology does not exist in isolation. Each new wave brings its own organisational and social implications. If Western governments want to deliver the reforms they have promised since the 1990s, they must make technology the core of a structural transformation of government.
Linear, sequential methods were revolutionary for manufacturing in 1913, but they are not suited to a digital age powered by technologies such as AI. It is time to move away from the assembly-line mentality of a century ago and adopt modern, iterative ways of working at every level.
AI’s biggest contribution will be to trigger the long-promised overhaul of government structure and operations. It could be the catalyst that helps governments finally achieve the “digital transformation” they have long promised, by forcing them to confront and discard their industrial-era assumptions.