South Korea is hosting the AI Safety Summit today (21 May) in collaboration with the UK, following the UK's inaugural AI Safety Summit at the historic Bletchley Park estate in November 2023. The 2024 Seoul mini-summit is intended as a continuation of that event, with virtual discussions co-hosted by UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.
Heather Dawe, chief data scientist at IT company UST, said that the rapid pace of AI development often means drafted AI legislation is outdated by the time it is passed.
Dawe also stated that each country had its own approach to regulating AI, which could delay global consolidated principles from being agreed upon.
“More international cooperation and consensus building would help to mitigate this. For example the US and UK are now partnering more closely together, jointly developing tests for advanced AI models,” said Dawe.
“Such partnerships will likely speed up the development of the tools we have to regulate AI and build consensus on how AI should be regulated at a faster pace,” she said.
Country delegations that attended the UK's AI Safety Summit signed the Bletchley Declaration, pledging to cooperate on creating human-centric AI regulations that simultaneously promote the technology's advancement and mitigate risk.
Seven months later, how have the signatory countries approached AI regulation?
The European Union
The European Union's (EU) AI Act is widely viewed as the global front-runner in AI regulation. It is the world's first legal framework that formally addresses the risks posed by AI.
On 2 February this year, the EU AI Act was finalised and endorsed by all 27 EU member states, followed by the European Parliament on 13 March. The Act now awaits a final vote by the Council of the EU before publication in the EU's Official Journal.
The US
At the time of writing, there is no all-encompassing AI regulation in the US. However, the White House has issued an executive order on AI, establishing safety standards to avoid a self-regulating AI market.
The order requires AI developers to share the safety test results of their AI models with the US government, to mitigate potential risks posed by large-scale AI foundation models available on the open market.
The order also states that AI companies must take steps to mitigate the risks that AI-enabled fraud poses to US consumers.
In February 2024, the US made AI-enabled robocalls illegal following a series of hoax phone calls generated to sound like President Biden. The calls encouraged New Hampshire residents not to vote in upcoming primary elections, potentially affecting the outcome of the vote.
The US Federal Communications Commission ruled that calls made with AI-generated voices fall under the 1991 Telephone Consumer Protection Act, extending the law's restrictions on artificial voices to generative AI in faked phone calls.
The US has also scrutinised how companies represent their use of AI.
Gary Gensler, the chair of the US Securities and Exchange Commission (SEC), warned businesses against engaging in AI washing during a speech at Yale Law School. Gensler stated that any business advertising its products as having AI features or tools must have a reasonable basis for its claims about the technology.
Since his warning in February 2024, the SEC has fined investment advisors Delphia and Global Predictions a combined $400,000 for misrepresenting their use of AI technology.
Following the fine, the SEC’s Office of Investor Education and Advocacy published an Investor Alert warning investors about the potential use of AI in investment fraud.
More recently, in May, the Biden administration, alongside the US Department of Labor, released a set of key principles to protect workers' wages and hours from harms arising from their employers' use of AI.
Microsoft and Indeed have already committed to adopting these principles, which include the responsible use of worker data in AI training.
Under the principles, companies must limit the worker data they collect and use to train their AI models, handle that data securely and use it only to support legitimate business operations.
South Korea
Like the US, South Korea does not currently have a specific act for AI. Multiple proposed acts have been put forward to South Korea’s National Assembly since 2022, the most prominent being the Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI (AI Act).
The proposed act takes a technology-adoption-first stance, aiming to regulate AI only after businesses have adopted it into the mainstream. This approach has sparked controversy.
The Association for Progressive Communications (APC), an international communications infrastructure and human rights advocacy group, stated that Korean civil society widely rejects the approach of the proposed bill.
The APC stated that many civil society organisations had instead produced their own draft AI bills, but these have not been put before South Korea's National Assembly.
Despite its light-touch approach to drafting a national AI bill, South Korea has provided consumer and data protections for explainable AI.
South Korea's Personal Information Protection Act grants citizens the right to an explanation of automated decisions that have a significant impact on their lives. While not written specifically for AI, the act applies where AI has been used to automate decisions, and South Korean citizens can refuse an automated decision if it can instead be reprocessed with human intervention.
The UK
The UK government acts under AI guidelines centred on the core principles of security, safety, robustness, transparency, fairness, redress and governance.
On 6 February 2024, the UK government published its response to its AI white paper, reaffirming a pro-innovation approach to AI regulation. In the response, the government stated that it expects AI developers to adhere to existing UK law, including data protection, across the AI lifecycle.
Following its AI Safety Summit, the UK launched its AI Safety Institute, which is set to open an office in San Francisco. The office is expected to open this summer and will cement the relationship between the US and UK on AI safety.
Despite the UK's efforts to become a global leader in AI regulation, a report published in February by the House of Lords Communications and Digital Committee warned that the government may not be taking a holistic approach to AI regulation.
The report warned that the UK economy risks losing out on AI-driven growth if stringent regulations are put in place before the technology can fully develop and be deployed, with smaller tech companies particularly exposed.
In April this year, the UK's Competition and Markets Authority (CMA) released a warning about interconnected AI partnerships between Big Tech companies, advising that they may be stifling competition in the UK's AI market.
The CMA reviewed Microsoft's involvement with Mistral AI and Inflection, but dropped its review of the Mistral AI partnership on 17 May.
UN General Assembly
In March, the United Nations General Assembly approved the first UN resolution on AI. The resolution was sponsored by the US and co-sponsored by 123 countries, including China.
The resolution was adopted by consensus without a vote, with unanimous support from all 193 UN member states.