As AI rapidly evolves, we must make smart choices to reap the rewards while controlling the risks. This means recognising the control that humans have: AI isn’t an unpredictable external force like inflation. Government and business leaders worldwide should start from that premise to create balanced regulation that benefits everyone – and there’s no time to waste.

This year’s AI Seoul Summit in May – successor to last year’s Bletchley Park AI Summit – seemed promising, but substantial results are still needed. Last October, President Biden’s executive order to create AI safeguards proved auspicious. Now, across the pond, with a new government in place, AI regulation is an unmissable opportunity for the UK to get right and set a global example.

AI is a vast field, so understanding the difference between ‘narrow’ and ‘general’ AI is crucial. Narrow AI – designed for specific tasks set by humans, like writing marketing copy from prompts – is safer and offers immediate value. Conversely, general AI systems – which mimic human intelligence across a wide range of tasks – are far riskier and less well defined. Artificial general intelligence (AGI) doesn’t exist yet, but technologies like GPT-4 are a step in that direction.

As such, regulators should see their role as accelerating the adoption of safe, narrow AI technology – paving the way for a soft landing into society, delivering tools without allowing harm. With narrow AI models, sector regulators from healthcare to education already know the problems they want to prevent. The regulatory challenge is to ensure that AI applications don’t exacerbate those problems – a challenge existing institutions are likely well placed to handle.

Malicious use of AI poses regulatory challenges

Regrettably, powerful models could end up in the hands of malicious actors. This presents newer, more complex challenges that traditional vertical regulatory methods will struggle to handle. Deepfakes, autonomous weapons, and large-scale misinformation campaigns indicate a need for a broader regulatory strategy that’s currently lacking.

Today’s prevention mechanisms rely on model safety training. Models are built by feeding them information from the internet and teaching them to understand language. The result is powerful technology, but one still devoid of morality. To fix this, a safety layer is applied on top of the model. After safety training, asking an AI model for advice on how to do something harmful, such as launching a lone-wolf terror attack, shouldn’t yield an answer.
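
To make the idea of a safety layer concrete, here is a minimal, purely illustrative sketch in Python. The base_model_generate stub and the blocked-topic list are hypothetical placeholders rather than any vendor’s actual implementation, and real safety training adjusts a model’s weights rather than wrapping it in a filter – but the wrapper shows the separation between raw capability and the refusal behaviour layered on top.

```python
# Illustrative sketch only: a hypothetical refusal wrapper around a raw model.
# Real safety training changes the model's weights; this stands in for that idea.

BLOCKED_TOPICS = ["weapon", "terror attack", "explosive"]  # hypothetical list


def base_model_generate(prompt: str) -> str:
    """Stand-in for an unaligned base model that will answer anything."""
    return f"[raw model output for: {prompt}]"


def safety_layer(prompt: str) -> str:
    """Refuse requests that touch blocked topics; otherwise pass them through."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return base_model_generate(prompt)


if __name__ == "__main__":
    print(safety_layer("Write marketing copy for a bakery"))  # answered
    print(safety_layer("How do I plan a terror attack?"))     # refused
```

In the terms of this sketch, a leaked pre-safety-training model is base_model_generate with no wrapper at all – which is why access to the raw weights matters so much.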

However, untrained versions of these models still sit on the servers of the companies training them and could be misused if accessed. LLaMA, the open-source AI language model originally developed by Meta, leaked last year after it had undergone safety training. If a pre-safety-trained model ends up on the dark web, criminals worldwide can access it. Criminals shouldn’t be handed the productivity gains from these models that consultants, web developers, and copywriters now enjoy.

But these models are currently not subject to regulation. This is a gap in the sector-focused approach. The horizontal approach of the now legally enforceable EU AI Act begins to address problems like this, reserving the power to demand security standards for how certain AI models in development are stored. However, these requirements currently apply only to models deemed to pose “systemic risk”. Safety-trained models that fall below this bar could still cause harm in the wrong hands, and the threshold should be lowered to cover them.

Moreover, when considering general AI models, any possibility of achieving AGI should be taken extremely seriously and approached differently. The EU’s regulatory framework focuses on models with computational complexities similar to today’s AI, like ChatGPT. However, the gap between current models and potential future ones that might approach AGI capabilities is significant and concerning.

Regulate AGI as if it were nuclear technology

Restrictive measures akin to those for biological and nuclear technologies – after all, this isn’t the first time humans have had to regulate to keep themselves safe – may become necessary for AGI. Despite fears, and some high-profile tragedies, humanity has been reasonably successful at preventing both the spread of nuclear weapons and nuclear disasters. This is the result of extremely stringent regulation and inspection regimes, delivered through bodies like the IAEA.

Without granting independent experts the power to routinely inspect companies, publish their findings, and fall back on a tough sanctions regime when violations are found, the mistakes of the past could come back to haunt humanity. Fukushima and Chernobyl are stark reminders of the consequences of scant regulation and opacity.

The UK’s AI Safety Institute is a strong step towards preventing history from repeating itself. More nations should build on this progress, focusing on safe applications and investing in safety technology to inspect, test, and monitor frontier models. This would help regulate algorithmic power if necessary – much as sovereign access to nuclear materials is tracked and controlled internationally.

As for the research needed to build the technical capability to control these frontier models, it’s reasonable that at least some of the cost should sit with the companies building those models. They could be required to invest time and resources in AI safety research in proportion to the total sum they spend on building the underlying technology – just as pharma companies are responsible for proving the safety of the drugs they develop.
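
To illustrate what such a proportional obligation could look like in practice, here is a deliberately simple, hypothetical sketch – the 10% rate and the spend figure are invented for the example and are not a proposal from this article or any regulator.

```python
# Hypothetical illustration of a proportional safety-research obligation.
# The rate and the spend figure below are invented for this example only.

SAFETY_RESEARCH_RATE = 0.10  # assumed rate: 10% of model-building spend


def required_safety_investment(model_build_spend: float) -> float:
    """Minimum safety-research spend under the assumed proportional rule."""
    return model_build_spend * SAFETY_RESEARCH_RATE


if __name__ == "__main__":
    build_spend = 500_000_000  # e.g. $500m spent training a frontier model
    obligation = required_safety_investment(build_spend)
    print(f"Required safety-research investment: ${obligation:,.0f}")
    # Prints: Required safety-research investment: $50,000,000
```

In practice, the rate, the definition of qualifying spend, and how it is verified would all be for regulators to settle; the sketch only shows that the mechanism itself is straightforward.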

Many questions still need to be answered about building the technologies and tools that allow us to interrogate, understand, and control AI models. However, regulation should be approached by focusing on opportunities first, rather than on fear-mongering. That will be the key to getting regulation right for everyone.