
The UK Government has recognised the opportunity AI presents to the economy and has pushed to capitalise on it. The AI Opportunities Action Plan is a statement of intent that steps up efforts to unlock AI’s economic potential.
But as we rush to embrace the future, it’s critical to look at what we can learn from sectors that have also undergone rapid change in the pursuit of innovation and growth.
The UK’s cybersecurity sector, for example, is a £12bn industry that has produced some of the country’s best talent and innovation – from hardware-based web isolation technology to AI-driven cybersecurity. But that success was only possible because the industry established guidelines and measures that built confidence in the technology from the outset.
To secure AI’s long-term success, its developers can reflect on how cybersecurity has flourished, and on why the industry moved from a niche concern to a national priority in a few short years.
So, what can the world of AI learn from cybersecurity’s past?
The evolution of cybersecurity
Initially an afterthought, cybersecurity has become a critical part of the digital world, enabling innovation while embedding safety standards to protect data and systems. As governments consider adopting large-scale AI projects, they must follow a similar trajectory. Just as cybersecurity standards evolve to address emerging threats, AI safety must adapt to its own unique challenges. Without robust safeguards, the potential for AI misuse could erode public trust and stall progress.
The transformation of cybersecurity was driven by the understanding that trust is the foundation of success. Trust is built on transparency and accountability, which makes robust governance of AI systems and responsible data practices non-negotiable; standards such as ISO 42001 provide a framework for both. And much like cybersecurity, where data breaches can devastate reputations, irresponsible handling of AI data can have far-reaching consequences.
Smart regulation is key
The UK has a chance to drive up standards through the upcoming Cyber Security and Resilience Bill, which also offers an opportunity for close dialogue between the domains of AI and cybersecurity. AI will drive an exponential increase in the volume of data that needs protecting and expand the armoury of tools available to attackers, but it can also augment the capabilities of defenders. To achieve its aims, the Bill will need to respond to the cybersecurity challenge posed by AI and set higher standards for AI-augmented resilience.
One of the most significant parallels to cybersecurity is the role of human error. Darktrace research found that Black Friday-themed phishing attacks jumped 692% in November as bad actors sought to take advantage of consumers caught up in shopping sprees. Just as cybersecurity threats often exploit a lack of awareness or poor decision-making, AI-related risks frequently stem from improper use or an insufficient understanding of the dangers involved.
A phishing email or a malware link can be rendered harmless if the recipient is trained to recognise and avoid it. Similarly, AI systems, no matter how advanced, can be made much safer by training those who interact with them. As AI becomes increasingly integrated into everyday life, developers need to understand the potential risks, while end users must be equipped to navigate these tools with a clear awareness of their capabilities and limitations.
Aligning innovation with responsibility
The government’s focus on growth doesn’t have to come at the expense of safety. Cybersecurity is an example of how industry and government collaboration can effectively manage both hardware and software risks as a sector scales. Cybercrime is valued at five times the revenue of the Magnificent Seven stocks – and only set to grow in 2025 – but investment in cyber resilience has allowed the UK to foster innovation to fight these threats while mitigating risks. A similar approach will be essential for AI, ensuring the UK can position itself as a global leader in this transformative technology.
This means fostering an environment where businesses and government can drive AI innovation while maintaining responsible safeguards that build trust. Collaboration between the public and private sectors has been instrumental in cybersecurity, from developing safety standards to establishing clear guidelines that balance progress with security. AI will require the same level of coordinated effort.
AI represents a once-in-a-generation opportunity to drive progress and prosperity. By learning from cybersecurity’s evolution, embedding trust through governance, introducing smart regulation, improving AI training, and aligning innovation with responsibility, the UK can lead the world in building a secure, sustainable AI ecosystem.
Having a plan for AI is good, but we must move fast to deliver it and realise the huge potential that safe and secure AI adoption presents.