The G7 is set to agree on a code of conduct for companies creating advanced artificial intelligence (AI) systems, according to G7 documents. 

The news comes ahead of the UK’s AI Safety Summit on 1 November, where global leaders and businesses will discuss how to get ahead of the rapidly growing technology, hoping to mitigate its risks and misuse.

In a document viewed by Reuters, the G7 said its 11-point code of conduct “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organisations developing the most advanced AI systems.”

This includes the most advanced foundation models and generative AI systems, the document added. 

The G7 urges companies to proceed with caution when developing AI and to implement measures to identify, evaluate and mitigate risks across the technology’s lifecycle.

Earlier this month, Vera Jourova, vice president of the European Commission for Values and Transparency, said the new code of conduct would act as a bridge until regulation is properly in place.


The EU has been leading the way with its AI Act, introduced in June 2023.

Michael Queenan, CEO and co-founder of UK-based data services integrator Nephos Technologies, believes that a code of conduct will be difficult for the G7 to agree on and should focus on areas that can add the most value.

“It will be hard to get world leaders in the G7 to align on conduct when so much is at stake here,” Queenan told Verdict.

He added: “Whoever gets there first has a huge power and economic opportunity. The more advanced AI capabilities a country has, the more they can advance their society, but simultaneously, with power comes danger.”

Queenan said recent discussions around AI safety policies have focused on which models are safe to use. However, he believes they should be discussing the areas in which AI can add the most value.

“A code of conduct should be addressing how to protect humans whilst doing that,” Queenan said, “guardrails are not the same as safety but will ensure that those using and designing AI models are doing so responsibly.”

Laura Petrone, analyst at research company GlobalData, said that countries like Japan and the US have taken a hands-off approach to AI regulation. 

Petrone believes that it is “critical” for the G7 to include all of these different views and approaches in the code of conduct. 

“However, the aim of such an initiative should also be that of setting the discussion around how to best regulate AI without limiting innovation,” Petrone added. 

“Identifying the AI risks alone is not enough and European Commission digital chief Vera Jourova is right to point out that this code should act as a bridge until regulation is in place,” she said.

White House to take action on AI

On Monday, US President Joe Biden announced his administration would be taking action on AI. It follows the UK’s announcement of a “world’s first” AI safety institute ahead of its hosting of the AI Safety Summit on 1 November.

Biden’s executive order will look to set parameters around the emerging technology, as so far the US has adopted a fairly hands-off approach.

The order will require developers of AI systems that could potentially pose risks to US national security or public health to share the results of their safety tests before the systems can be released to the public.

The tests are to be sent to the US government and will need to remain in line with the Defense Production Act.

The new rules also call for the development of “best practices” to address the effects AI may have on workers and job displacement.

Biden’s new orders go further than the voluntary commitments made by OpenAI, Alphabet and Meta earlier this year. The leading AI companies agreed to watermark all AI-generated content to quell fears of misinformation and misuse.

Bruce Reed, White House deputy chief of staff, described the new order as the “strongest set of actions” any government had ever taken on AI regulation.

“It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” he said.