It has been a busy few weeks for observers of AI in Europe: first, the issuance of new guidance around the AI Act, the most comprehensive regulatory framework for AI to date; and second, the AI Action Summit, hosted by France and co-chaired by India.
The stakes were high, with almost 100 countries and more than 1,000 private sector and civil society representatives in attendance, and the ensuing debate delivered in spades.
With the summit following the latest AI Act guidance by a matter of days, part of the event concentrated on issues around regulation.
At the summit, the EU launched InvestAI, a €200bn ($210bn) initiative to finance four AI gigafactories for training large AI models, as part of a broader strategy to foster open and collaborative development of advanced AI models in the EU.
The AI Action Summit provided a platform to ponder the question: does innovation trump regulation? However, it can be argued that ignoring the risks inherent to AI will not accelerate innovation, and that Europe's current challenges have more to do with market fragmentation and a less dynamic venture capital market. It is also important that democratic governments enact practical measures, rather than offer platitudes, to address the risks that the misuse of AI models poses to social, political and economic stability around the world.
GlobalData analyst Beatriz Valle commented: “Comprehensive regulation provides a framework of market stability that can lead to greater adoption of AI, as organizations feel more confident about compliance and this in turn leads to greater investment. Explainability is also enshrined in this legislation, something that in turn promotes and fosters Responsible AI. Companies may have to share information about why an AI system has made a prediction and taken an action; this also accelerates research and benefits everyone.”
AI Act – a four-tier system
The AI Act follows a four-tier risk-based system. The highest level, “unacceptable risk”, includes AI systems considered a clear threat to societal safety.
Unacceptable risk
Eight practices are included: harmful AI-based manipulation and deception; harmful AI-based exploitation of vulnerabilities; social scoring; individual criminal offence risk assessment or prediction; untargeted scraping of the internet or CCTV material to create or expand facial recognition databases; emotion recognition in workplaces and education institutions; biometric categorisation to deduce certain protected characteristics; and, real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
Provisions at this level, which include the untargeted scraping of the internet to create facial recognition databases, came into force on 2 February 2025. These systems are now banned.
High risk
The next level down, the “high-risk” level, covers AI use cases that can pose serious risks to health, safety or fundamental rights. These include threats to critical infrastructure such as transport, the failure of which could put the life and health of citizens at risk; AI used in education institutions that may determine access to education and the course of someone’s professional life, such as the scoring of exams; and AI-based safety components of products, such as AI applications in robot-assisted surgery.
Although they will not be banned, high-risk AI systems will be subject to legal obligations before they can be put on the market, including adequate risk assessment and mitigation systems and detailed documentation providing all the information necessary for authorities to assess compliance.
Limited risk
This tier carries lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI, in practical cases such as chatbots and deepfakes. The AI Act sets out specific transparency obligations for such limited-risk systems.
Minimal or no risk
The final tier is “minimal or no risk”. These systems face no obligations under the AI Act because they pose little or no risk to citizens’ rights and safety, although companies can voluntarily adopt additional codes of conduct.
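As a rough summary of the tiers described above, the sketch below maps each risk level to its broad treatment under the Act. The tier names follow the article; the short treatment and example strings are illustrative paraphrases, not legal text.

```python
# Illustrative summary of the AI Act's four-tier risk system.
# Treatment and example strings paraphrase the article; they are not legal text.
AI_ACT_RISK_TIERS = {
    "unacceptable_risk": {
        "treatment": "banned outright",
        "examples": ["social scoring",
                     "untargeted facial recognition scraping",
                     "emotion recognition in workplaces and schools"],
    },
    "high_risk": {
        "treatment": "allowed, subject to legal obligations before market entry",
        "examples": ["critical infrastructure such as transport",
                     "scoring of exams",
                     "AI safety components in robot-assisted surgery"],
    },
    "limited_risk": {
        "treatment": "transparency obligations (disclose interaction with AI)",
        "examples": ["chatbots", "deepfakes"],
    },
    "minimal_or_no_risk": {
        "treatment": "no obligations; voluntary codes of conduct",
        "examples": [],  # the article gives no specific examples for this tier
    },
}

if __name__ == "__main__":
    for tier, details in AI_ACT_RISK_TIERS.items():
        print(f"{tier}: {details['treatment']}")
```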
AI Act guidance impact
Companies that do not comply with the rules will be fined. Fines can reach up to 7% of global annual turnover for the use of banned AI applications, up to 3% for violations of other obligations, and up to 1.5% for supplying incorrect information.
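As a simple worked example of the percentage caps above, the sketch below computes the maximum percentage-based fine for a hypothetical company with €10bn in global annual turnover. The turnover figure is invented for illustration, and only the percentage ceilings cited in this article are modelled.

```python
# Illustrative calculation of the maximum percentage-based fines under the AI Act.
# Only the percentage caps cited in the article are modelled; the turnover is hypothetical.
FINE_CAPS = {
    "banned_ai_application": 0.07,             # up to 7% of global annual turnover
    "other_obligation_violation": 0.03,        # up to 3%
    "supplying_incorrect_information": 0.015,  # up to 1.5%
}

def max_fine(global_annual_turnover_eur: float, violation: str) -> float:
    """Return the maximum percentage-based fine for a given violation type."""
    return global_annual_turnover_eur * FINE_CAPS[violation]

# Hypothetical company with €10bn in global annual turnover.
turnover = 10_000_000_000
for violation in FINE_CAPS:
    print(f"{violation}: up to €{max_fine(turnover, violation):,.0f}")
```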
During the summit, the potential impact of this new guidance was discussed, with the US contribution mainly limited to criticising European regulation and warning against cooperation with China. The US and the UK refused to sign the AI Action Statement, the summit declaration on ‘inclusive’ AI, a snub that dashed hopes for a unified approach to regulating the technology.
The document was backed by 60 signatories including Australia, Canada, China, France, India and Japan. The US said it prefers to prioritise so-called pro-growth AI policies over safety, while France, as host of the summit, stated the need for rules that lay the groundwork for faster adoption and growth.