Throughout my time as deputy commissioner at the UK’s Information Commissioner’s Office, the consensus view on AI governance and regulation evolved at a slow and steady pace.
The generative AI (GenAI) revolution caught governments around the world unprepared, triggering divergent policy responses that highlight contrasting regulatory philosophies. The EU has recently shown its hand, while the incoming Trump administration is playing its cards close to its chest. Now, as the UK prepares to unveil its approach, where should it place its bets?
The EU and US: contrasting models of AI regulation
The EU has staked its claim as a leader in AI regulation with its comprehensive AI Act, set to come into force in phases from February 2025. The Act employs a risk-based approach, banning certain uses of AI while imposing stringent obligations on “providers” of high-risk systems—predominantly large tech platforms and players such as OpenAI, Anthropic, and Mistral.
These obligations will be supported by Codes of Practice, the first of which are in early drafts. It is already clear that developing and deploying AI in the EU will require adherence to detailed and prescriptive rules. It remains to be seen whether a ‘Brussels Effect’—where other countries emulate EU policies—will occur here, as it has with privacy legislation. So far, there has been no rush to replicate the EU model elsewhere.
The US position is far less clear. The previous administration introduced an Executive Order to standardise AI use in public services, but its successor appears poised to revoke it without providing clarity on alternative measures. This doesn’t mean the US will necessarily pursue a ‘free market at all costs’ approach; many members of the incoming administration have voiced support for some intervention. Elon Musk’s well-documented concern that “there is some chance, above zero, that AI will kill us all” reflects a growing awareness of the stakes. Even Trump himself recently stated that “Big Tech has run wild for years … stifling competition”.
The US approach is likely to prioritise trade and competition considerations over the EU’s focus on human dignity and rights. While the US is unlikely to replicate the EU model, it will likely take a lighter touch overall, addressing AI issues reactively and primarily through the lens of economic and competitive priorities.
The UK opportunity: a pragmatic middle ground
So, where does this leave the UK? For once, there is a genuine opportunity to chart a third way, striking a balance between the EU’s comprehensive approach and the likely volatility of the US.
The initial signs suggest that the current UK Government sees AI primarily as a driver of economic growth and public sector transformation, focusing on the upsides of innovation rather than the downside risks that have preoccupied EU policymakers.
The government is expected to announce its plans just before or after Christmas. Based on public pronouncements, it seems likely that the UK will aim to occupy a middle ground—avoiding the rigidity of the EU framework while maintaining a stable regulatory environment.
This may include frontier AI safety: elevating the AI Safety Institute to a statutory role could position the UK as a global leader, particularly if Trump follows through on his plans to close its US counterpart. An alternative is integration with existing regulation: rather than creating AI-specific rules for most activities, the government may require existing regulators to identify when AI alters the risks they oversee and respond accordingly.
It could also include dynamic oversight. Acknowledging the rapid pace of AI development, the UK may adopt a monitoring-first approach, potentially involving bodies such as the Regulatory Innovation Office to reduce barriers and accelerate innovation. Coordination among regulators, perhaps via the Digital Regulation Cooperation Forum, could also play a key role.
Striking the right balance
This approach would make sense. AI is an exciting technology with the potential to revolutionise industries, but it is not an isolated phenomenon. In most cases where AI poses risks to people or society, existing regulation already covers those risks. For example, we already have rules governing data use in credit decisions, immigration, and recruitment, all areas classed as high-risk under the EU AI Act. Adding AI-specific rules could create confusion and potentially slow regulatory responses. It is better to let existing regulators and regulations handle these challenges, provided they are given the necessary resources.
That said, I am keen to see how the government proposes to help regulators develop the competence and capacity needed to take on additional duties. The previous government supported a similar position but offered little detail on how regulators would manage a rapidly evolving risk landscape without additional funding.
In the race to regulate AI, the UK’s “third way” holds significant promise. However, like the technology itself, it will require constant adaptation to remain effective. By focusing on targeted oversight and dynamic regulation, the UK can position itself as a global leader in AI governance—provided it is prepared to invest in the institutions and expertise needed to rise to the challenge.