Broadly speaking, there are three forms of AI regulation: risk-based, sector-by-sector, and principle-based.

Most countries that have implemented AI legislation favour a risk-based approach, while a small number differ. The UK, for example, is following a sector-by-sector approach, and Japan is using principle-based governance.

A risk-based approach “involves determining the scale or scope of risks related to a concrete situation and a recognised threat”, essentially meaning that a governing body proposes regulation based on both anticipated and realised threats. That body then creates a hierarchy of sorts, categorising AI systems by risk level according to the scale or number of those threats.

The state of global, risk-based AI regulation

Many nations have proposed risk-based AI regulation, including most of South America, Canada, and Australia. The largest single piece of implemented AI legislation is the EU AI Act, which categorises systems into four broad tiers of risk: unacceptable, high, limited, and minimal. Systems posing unacceptable risk are prohibited in most cases, while minimal-risk systems are allowed to operate freely.

A system may be deemed an unacceptable risk for a multitude of reasons, including exploiting vulnerabilities or conducting biometric categorisation. High-risk systems must also be placed on a register. However, risk-based regulations come with complications of their own. A purely illustrative sketch of how such tiered classification might work follows.
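
To make the tiered model concrete, the following is a minimal Python sketch of how a tiered risk classification might work. The tier names mirror the Act’s four levels, but the system attributes, thresholds, and classification logic here are hypothetical simplifications for illustration, not the Act’s actual legal criteria.

```python
# A purely illustrative sketch of tiered risk classification in the spirit of
# the EU AI Act. Tier names match the Act's four levels; the attributes and
# logic below are hypothetical simplifications, not the Act's real criteria.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited in most cases
    HIGH = "high"                  # allowed, but must be placed on a register
    LIMITED = "limited"            # subject to transparency obligations
    MINIMAL = "minimal"            # allowed to operate freely


@dataclass
class AISystem:
    name: str
    # Hypothetical flags standing in for the Act's real criteria.
    exploits_vulnerabilities: bool = False
    biometric_categorisation: bool = False
    used_in_critical_infrastructure: bool = False
    interacts_with_users: bool = False


def classify(system: AISystem) -> RiskTier:
    """Assign a tier by checking the most severe criteria first."""
    if system.exploits_vulnerabilities or system.biometric_categorisation:
        return RiskTier.UNACCEPTABLE
    if system.used_in_critical_infrastructure:
        return RiskTier.HIGH
    if system.interacts_with_users:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# High-risk systems must also be entered on a register before deployment.
register: list[str] = []

grid_controller = AISystem("grid-load-balancer",
                           used_in_critical_infrastructure=True)
tier = classify(grid_controller)
if tier is RiskTier.HIGH:
    register.append(grid_controller.name)
print(tier, register)  # RiskTier.HIGH ['grid-load-balancer']
```

Even this toy version hints at the complications discussed below: everything hinges on how the flags are defined, and a system that narrowly misses every criterion defaults to minimal risk regardless of the harm it might cause.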

AI as a product

AI is a rapidly developing technology, with new uses being discovered daily. For AI regulation to be effective, it needs to balance protecting consumers with fostering innovation. Any proposed bill would have to be broad enough to apply in all relevant circumstances, yet specific enough to penalise those who use AI technology maliciously.

According to a survey by MIT Sloan Management Review and Boston Consulting Group, 47% of industry experts question whether companies will be able to meet the EU AI Act’s compliance timeline. Some are also concerned that governing AI by specific use case, as the EU AI Act does, may prove too narrow.

What happens when an AI system behaves differently from how the user intends? Gender recognition, for example, is not prohibited under the EU AI Act and is usually seen as a trivial use of AI. However, AI has been shown to discriminate against gender nonconforming people by misgendering them or forcibly assigning them a gender identity. What would happen if an AI system with significant public responsibility acted on this information? Conversely, if a rule (or risk level) is defined too specifically, it creates the risk of easy circumvention.

Outlined rules can quickly become outdated in more ways than one, and an overly precautionary risk-based system can stifle innovation well in advance. Risky systems that do not fit a high or unacceptable risk profile can also slip through the cracks, despite being potentially harmful. Additionally, risk-based AI regulation is premised on the idea that AI systems can easily be identified, isolated, and banned through tiers of risk, which is often not the case.

Overall, framing AI regulation in terms of risk encourages the idea that legislation can always ‘lower’ that risk. Sometimes these systems undermine our rights in ways that cannot be mitigated. Given the glaring holes in a risk-based approach, is there another way of comprehensively regulating AI?

A rights-based approach?

Many have argued that there is a better way of regulating AI: governing how systems impact human rights. The line of thought goes that, as the technology develops, we hold a set of rights that AI cannot be allowed to infringe. This would give companies clear red lines to avoid, regulators a framework to enforce, and, in theory, a future-proof set of rules that would not harm innovation given enough time to prepare.

Those who advocate a rights-based approach have a point on both philosophical and practical grounds. GDPR, which the EU claims was a risk-based approach, was ultimately predicated on rights that could not be infringed. The same cannot be said for the EU AI Act, which outlaws certain functions rather than protecting rights. GDPR was effective because it blanket-banned all violations of given rights, not because the bill allowed for certain uses of personal data.

Ultimately, clarity is needed

While this article may seem critical of risk-based approaches to AI regulation, the EU AI Act is still the best-developed piece of AI regulation the world has seen to date. The lack of detailed AI guidance, frameworks, and regulatory bodies in most other countries leaves something to be desired. However, no single piece of regulation is perfect, and a purely rights-based approach would be practically unfeasible to enforce; requesting permission for data scraping from so many people, for instance, would be impossible.

Popular culture has long predicted the dangers of AI, with rogue systems that turn against humanity being the most common depiction of agentic AI in the media. Statements akin to Rishi Sunak’s summary of AI jurisprudence, that we should take risks seriously but should not be in “a rush to regulate” AI, are predicated too much on fostering technological investment rather than accurately assessing the current and future dangers of the technology.

Clarity on future policy matters. A risk-based framework with more concrete definitions, more stringent requirements and obligations, and harsher punishments according to a system’s risk level is likely the best blanket approach for governments, with specifics depending on the value and strategic importance a nation places on AI.