California Governor Gavin Newsom has vetoed a state-level AI safety bill as the global debate over how to regulate AI intensifies.

The bill, officially known as SB 1047, targets companies developing the large language models that power generative AI tools. But according to a Wall Street Journal report on 29 September, citing a person close to the governor, Newsom vetoed the bill because it applies only to the biggest and most expensive AI models, leaving the rest of a rapidly growing AI market essentially unregulated.

Indeed, the governor’s veto message read: “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology.

“Smaller, specialised models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”

Newsom’s veto comes within days of the UN releasing the final report of its High-level Advisory Body on Artificial Intelligence. UN Secretary-General António Guterres warned that “there are serious concerns that the power of AI is concentrated within a small group of companies and countries.”

Guterres recommended an international AI regulation architecture that is inclusive, agile and effective, rather than the current “patchwork” of global guidelines and treaties.


The UK hosted the first global summit on AI regulation at Bletchley Park in November 2023, followed by a second summit in Seoul in May this year. Both events were attended by G7 technology leaders and appeared to be a promising start to multilateral dialogue on the matter. But a global framework for regulating AI is yet to emerge.

Instead, the Biden administration in the US issued an executive order creating AI safeguards in October 2023, the UK published its AI Regulation Framework in February, and the EU passed the world’s most comprehensive regulation, the EU AI Act, in March this year. Switzerland, Canada, Brazil, China, Japan and India are all in the process of developing their own AI safety policies.

Despite widespread support for AI guardrails across and within countries, disagreements prevail among stakeholders when it comes to establishing actual regulations for AI systems, according to GlobalData principal analyst Laura Petrone, who noted that the California governor’s veto exemplifies this tension.

The veto also demonstrates how, despite its rhetoric on AI safety, Big Tech is building a powerful front against any regulation that may threaten its AI plans, Petrone argued. “It is unsurprising that Europe, where these same companies don’t exert the same level of influence, managed to pass the first comprehensive AI regulation, the EU AI Act,” she said.

“However, even in that case, disagreements among EU states and regulators on implementing the rules will likely emerge over the coming years,” added Petrone.

Further uncertainty around AI regulation policy stems from the number of national elections taking place in 2024, each carrying the potential for a change of government. The UK’s change of government in July gives the new Labour administration an opportunity to get ahead of AI policy development, according to Marc Warner, CEO of UK AI software company Faculty. “With a new government in place, AI regulation is an unmissable opportunity for a new government to get it right and set a global example,” said Warner.

Warner echoes Governor Newsom’s concern about the need for regulation with a broad reach, noting that there are open-source models available that have not been safety-trained against malicious actors.

“If a pre-safety-trained model ends up on the dark web, criminals worldwide can access it. Criminals shouldn’t be granted the productivity gains from these models that consultants, web developers, and copywriters have,” said Warner.

Such models are currently not subject to regulation. “This is a gap in the sector-focused approach. The horizontal approach of the now legally enforceable EU AI Act begins to address problems like this, reserving the ability to demand security standards for how certain AI models in development are stored,” he said.

“However, the threshold at which these requirements apply is currently reserved for models deemed to pose ‘systemic risk’. Safety-trained models that fall below this bar could still cause harm in the wrong hands, and the threshold should be lowered to cover them,” added Warner.

Furthermore, any regulation should also take into account the potential for achieving artificial general intelligence (AGI), according to Warner. “The EU’s regulatory framework focuses on models with computational complexities similar to today’s AI, like ChatGPT. However, the gap between current models and potential future ones that might approach AGI capabilities is significant and concerning,” he added.