The European Union (EU) passed landmark artificial intelligence (AI) legislation last week, setting the tone for future regulations across the Western world. The law makes the EU the second major bloc after China to legislate to mitigate the impacts of AI’s rapid development.
The law will require “high-risk” AI systems (including generative AI products such as OpenAI’s ChatGPT and DALL·E and Google’s Gemini) to pass tests on bias, accuracy and transparency, while imposing lighter requirements on smaller and less sensitive AI systems.
How AI is regulated will naturally shape how it develops, with some seeing the EU’s new act as a positive step and others viewing it as too restrictive. There is widespread belief, though, that AI will ultimately be transformative across industries. In a survey of 386 people as part of GlobalData’s Tech Sentiment Polls Q4 2023 across its network of B2B websites, 92% responded that AI would either live up to all of its promise or that it was hyped but that they could still see a use for it.
EU AI Act: the pros
Most businesses appear to be in favour of the bill, emphasising the security it brings. Bruna de Castro e Silva, AI governance specialist at startup Saidot, tells Verdict: “The EU AI Act continues its unstoppable march as Europe shows that it is ready to set a responsible pace of innovation for AI. This is the culmination of extensive research, consultations, and expert and legislative work, and we’re glad that the first major regulation around AI is founded on a solid risk-based approach, which is pragmatic, impact-based, and crafted following years of industry consultation.”
“The Act will ensure that AI development prioritises the protection of fundamental rights, health, and safety while maximising the enormous potential of AI. This legislation is an opportunity to set a global standard for AI governance, addressing concerns while fostering innovation within a clear responsible framework.
“While some seek to present any AI regulation in a negative light, the final text of the EU AI Act is an example of responsible and innovative legislation that prioritises technology’s impact on people. When the EU AI Act comes into force it will enhance Europe’s position as a leader in responsible AI development, establishing a model for the rest of the world to follow.”
David Mirfield, VP of product management at AI-powered risk decisioning company Provenir, believes that the act will “undoubtedly trigger a new wave of investment in AI across the [EU],” due to a “significant increase in trust in AI technology.”
Dr Chris Pedder, chief data scientist at AI-powered EdTech Obrizum, sees similar benefits in taming the “wild west of AI”, referencing “instances like WeChat’s social credit scores and ClearviewAI’s facial recognition database” that “highlight the potential dangers of unchecked AI development.”
Pedder also believes that the bill has real teeth.
“The substantial penalties outlined in the Act, such as fines of up to 7% of global revenue, give the EU real enforcement power,” he explains. “The global reach of these regulations means that companies cannot simply avoid compliance if they wish to operate internationally. Furthermore, the AI Act’s passage may inspire other progressive jurisdictions, like California, to adopt similar measures.”
His view is shared by Agur Jõgi, CTO of CRM startup Pipedrive, who adds of the power of the EU to set global legislative agendas: “Approving the AI Act will have a huge effect for industries across the globe, as the ‘Brussels effect’ kickstarts legislative changes across international borders.
“For companies, AI can present huge boosts to productivity, reducing bottlenecks from administrative tasks through automation, or providing valuable strategic insights by collating enterprise data. However, its deployment cannot go unchecked because of well-understood risks from bias, or data leakage at scale.”
EU AI Act: the cons
Not everyone is in favour of the bill, however. Most complaints revolve around stifling innovation or accusations that the bill is too broad, opening up companies to unreasonable suits.
Jamie Moles, senior technical manager at cybersecurity firm ExtraHop, argues: “Cybersecurity is a continuous battle, and overly rigid rules can stifle the agility needed to adapt to emerging threats. A stronger focus on robust risk management frameworks would be more beneficial than the AI Act. This would allow innovation to flourish while mitigating potential misuse.
“Empowering developers to prioritise security throughout the development lifecycle, rather than limiting them with prescriptive technicalities, is key to building trustworthy and secure AI.”
This is a view echoed by Morgan Wright, chief security advisor at cybersecurity firm SentinelOne, who goes so far as to suggest that the bill itself might create security risks.
“Right now, it’s too early to tell what impact the Act will have in the long term,” Wright says. “In the short term, it will slow the delivery of services and capabilities as companies determine to what extent their technologies are subject to regulation. This provides an opening for our adversaries, who are not constrained by EU regulation.”
Particularly down on the bill is Dr Kjell Carlsson, head of AI strategy at Domino Data Lab, who argues that it poses an existential risk to businesses working with the technology.
Carlsson argues: “With the passing of the EU AI act, the scariest thing about AI is now, unequivocally, AI regulation itself. Between the astronomical fines, sweeping scope and unclear definitions, every organisation operating in the EU now runs a potentially lethal risk in their AI, machine learning and analytics-driven activities. However, using these technologies is not optional and every organisation must increase their use of AI in order to survive and thrive.”
EU AI Act: the legal ramifications
Despite the significance of the AI Act, some experts believe there are still ambiguities to be resolved within its text. Ben Maling, managing associate at patent and IP-focused law firm EIP, notes that the provisions requiring generative AI models to be trained only on content whose rightsholders have not opted out are well-meaning, but currently lack the clarity required to be properly implemented.
“Among other interesting topics, the newly approved EU AI Act requires that companies training generative AI models for the EU market respect machine-readable opt-outs from text and data mining even if their servers are in the US or Timbuktu or wherever else,” he tells Verdict. “Sounds promising for the likes of New York Times, Getty Images and millions of other rightsholders who don’t want their content hoovered up to train chatbots and image generators. But how can it be done, practically?
“Web crawlers and other robots typically (optionally) use the standardised robots.txt file of a website to determine whether they are permitted to process its content. But simply denying all robots would have dramatic negative consequences on SEO and other things that matter – who doesn’t want their site to be indexed by Google? On the other hand, denying crawlers on an individual basis is totally impracticable as the number of Gen AI providers grows.
“What’s needed is a standardised way to opt out web content from scraping for the purpose of generative AI training.”
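Maling’s point about the robots.txt mechanism can be seen in miniature with Python’s standard-library parser. This is an illustrative sketch, not part of any quoted expert’s material: the crawler name “ExampleAIBot” is hypothetical, and the snippet simply shows why per-crawler opt-outs do not scale as the number of generative AI providers grows — each new bot needs its own entry, while a blanket `Disallow` would also shut out search engines.

```python
# Sketch: checking robots.txt permissions with Python's standard library.
# "ExampleAIBot" is a hypothetical generative-AI crawler name.
from urllib import robotparser

# A site that opts one named AI crawler out while leaving search bots free.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The named AI bot is denied; ordinary crawlers (and thus SEO) are unaffected.
print(parser.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))     # True
```

The catch the quote identifies is visible here: opting out of generative AI training this way requires enumerating every AI crawler by name, which is exactly what a standardised, purpose-based opt-out signal would avoid.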
Jonathan Coote, music and AI lawyer at Bray & Krais, largely agrees, adding: “This could have a similar impact to that of the General Data Protection Regulation, which created a global gold standard of compliance. Providers of general-purpose AI will need to abide by EU copyright law which restricts training on copyright works and demonstrate their compliance. Crucially, this appears to apply even if the training was carried out in another more lenient jurisdiction.
“The Act will require deep fakes, including voice clones, to be labelled as fakes. This will be welcomed by artists but won’t actually stop these deep fakes from being circulated. We may well need a separate digital representation right to stop this from happening, as has been proposed in the US.”
Such are the legal implications of the AI Act that Mark Molyneux, EMEA CTO at data management platform Cohesity, suggests that generative AI companies may need to focus on compliance rather than new features for the time being.
He comments: “Low-risk applications of AI will see a lighter touch, but the big practical uses of AI will face detailed compliance requirements, much of which will need to be in place before companies start to innovate with AI – which means they are likely in breach of the act already and will need to draw back on development to get their house in order.
“Much of the law focuses on reporting on content used to train AI, the datasets that have given it the knowledge to perform. Consider, if you will, that the earlier models were using readily available internet and book crawls to train their AI, content which included copyrighted materials, one of the areas the AI Act is looking to clean up. If you have already started and used controlled datasets, you may well be starting over.”
Finally, Dexter Thillien, lead tech analyst at the Economist Intelligence Unit, makes the point that, while the legislation is important, it is not the only factor that will impact AI in the future.
“The Act will not be the only piece of legislation having an impact on the AI market,” says Thillien. “The GDPR, with its own focus on personal data protection, the DMA, with its focus on competition, the DSA, with its focus on content, and potential new rules on security, data, and algorithms, will also be applicable.
“As with many legislations, enforcement will be critical. We’ve seen how difficult it was with the GDPR, but it seems the Commission has learnt from the experience as the early evidence from both the DMA and DSA suggests it has much greater clout, even if it still needs to find the right staff with the relevant technical expertise for the AI market.
“There is a geopolitical competition when it comes to AI regulation. Most countries understand the need for regulation, but they differ in how to implement it (for instance, the UK is very light-touch and sector-specific so far). It is, however, another example of the increased fragmentation of the tech sector, which has benefitted more than any other sector from globalisation.”