The European Union could soon impose restrictions on various uses of AI, according to draft legislation leaked online. If the draft is implemented, the EU would outlaw China-style social credit checks, but would permit the use of facial recognition technology for mass surveillance by law enforcement agencies in some circumstances. The rules could affect companies within and outside the bloc.
Members of the European Parliament (MEPs) have, however, criticised the draft for not going far enough. Some want the EU to ban facial recognition software from public spaces entirely, while others have welcomed the draft for putting an end to the "Wild West" of tech self-regulation.
Politico first broke the news of the leaked document earlier this week. The European Commission, the EU's executive branch, will officially announce the proposals, or a version of them, on 21 April. The draft legislation will then be subject to a vote. If the rules come into force, businesses that break them could be fined €20m or 4% of their global turnover.
The draft proposes that AI mass surveillance should be prohibited for most purposes, including commercial ones. That could include deploying facial recognition software in public spaces. The leaked document acknowledges that AI technology “can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful” and that it could be detrimental to “human dignity, freedom, democracy, the rule of law and respect for human rights.”
However, the draft rules make exceptions allowing government agencies to use AI-powered mass surveillance to fight serious crime and terrorism. These provisions have already been criticised by privacy advocates.
If implemented, the proposals would also restrict the use of AI for social credit scoring systems like those in China, where AI is used to monitor citizens. Under these systems, Chinese citizens who commit misdeeds such as eating on public transport or playing loud music can face penalties including restrictions on movement or limited educational opportunities.
The leaked EU draft legislation suggested that "algorithmic social scoring of natural persons" should be restricted unless "carried out for a specific legitimate purpose of evaluation and classification".
The proposals would also prohibit high-risk AI systems from being deployed within the EU unless they meet certain standards.
Countries outside the EU would not be legally required to implement the new rules if they are voted through. Non-EU companies, or entire nations, might nevertheless choose to do so in order to trade more smoothly with the bloc. That has often been the case with the EU's General Data Protection Regulation (GDPR), introduced in May 2018. Although the UK was in the process of leaving the EU at the time, it still implemented GDPR, and as a result data-related business between the two remains comparatively straightforward in both directions. The Indian government is working on introducing regulations equivalent to GDPR to aid its trade with both the EU and the UK, and similar plans have been discussed in many other nations around the world.
After the draft AI rules were shared online, 40 MEPs signed an open letter to the European Commission. They called for a ban on facial recognition software being used in public spaces.
“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs wrote.
They also warned against using AI to track individual characteristics such as ethnicity, sexuality and disability, saying it could violate "rights to privacy and data protection", suppress free speech and prevent corruption from being exposed.
The European Commission has so far refused to comment on the leaked document.
“The Commission is set to adopt the regulatory framework on AI next Wednesday 21 April 2021. Any text that you might see before is therefore by definition not ‘legitimate’ – we do not comment on leaks,” a European Commission spokesperson told CNBC.
Peter van der Putten, assistant professor of AI at Leiden University and director of decisioning at Pegasystems, said the EU draft legislation shows that lawmakers have grown tired of companies regulating themselves.
“There is a common opinion from everyone, including AI technology providers and companies consuming these services, that self-regulation is not sufficient and some clear rules and boundary conditions will actually make it easier for companies to invest in responsible and trustworthy AI,” van der Putten said. “Whilst driving the acceptance of trustworthy AI might seem a lofty goal, it should be clear that the real goal is for AI applications to become truly worthy of our trust, by making the technology fair, transparent, explainable and robust.”