The European Parliament is not often thought of as an organisation that is particularly friendly to artificial intelligence (AI). Last month it approved a landmark bill regulating the technology, and it tends towards caution on new technologies of all kinds. Despite this, AI use is surprisingly widespread behind the parliament's own scenes.

At the Three Seas Summit in Vilnius, Lithuania, political and business leaders from the initiative's member states discussed the political and economic future of the region. The day after, Verdict sat down with Pierluigi Casale, one of the architects of the European Parliament's own AI software, to discuss adoption, security and compliance.

Casale is an innovation officer for data science and AI at the European Parliament, currently on personal leave. His views are personal and do not necessarily reflect the views of the European Parliament.

This interview has been edited for clarity and concision.

Is your job to find tools and alter them to fit the European Parliament’s needs, or to build them in-house?

It’s a combination of both. Of course, we don’t reinvent the wheel. If there is a solution that fits our needs, we go for it, but sometimes we need to create an ad-hoc solution for the needs of departments and then we build it ourselves.

For example, before leaving, we worked on an advanced analytics service that was basically a mixed cloud and on-premises service. It allows workers at the parliament to upload their own documents and use AI to get a summary of them, or to see the personal information contained in a document.


These are the types of services that workers need. We were talking with a head of unit and she said to us, "I know that if I'm in a rush and give a document to one of my people in the evening, the morning after they will not have read it. With this tool, I can be quite sure that they make a summary and get a grasp of it."

What other uses are there for AI in the European Parliament, either currently or in the near future?

There are several uses. As I was saying earlier, I like the word intellect better than intelligence, because intellect is something you acquire through knowledge, whereas intelligence is something innate in us. One of the key things is this advanced analytics capability. The parliament generates a lot of text. The AI Act is more than 250 pages. Who is going to read that? This is where you need to act.

What I think is becoming big now in the parliament, and in European institutions in general, is using AI in two areas: cybersecurity and programming. Cybersecurity because we are under threat, right? We are in a period when there is an imminent threat from everywhere.

Something that private institutions and big companies are already doing, and that is important for public institutions, is using AI as a companion for creating technology and for programming. There is a very established practice called pair programming, where you have somebody at your side whilst you're programming, and this can be done with AI. It already happens in big companies. Public institutions need to bridge this gap and use AI to speed up and improve.

The European Parliament has very specific cybersecurity needs due to its importance – how do you make sure the tools you use are secure?

Since I've not been there for a year, I cannot tell you the current status, but let's put it this way. There is always a human component in software and AI. Everything can be automated to a certain extent, but in the end you need people to validate that it is correct, and there are several phases of that.

In the beginning, you need to have a group of people who validate that your software is doing what is correct. Even in cybersecurity, you can use simulations to test threats. Then you need to validate that your automated threat solution is actually identifying which of the threats are real and which are caused by a malfunction.

After that, you have several more stages. This could be a small or a larger group of people but what’s important is that you allow people to give feedback. This is still important after you go to the full parliament, but by then you’ve already collected so much data that you can predict a lot of the problems that might arise.

Is there a skills gap in the European Parliament that’s slowing down the proliferation of this technology?

It's a good point, but I can tell you from working in private tech companies around Europe that this is not only a problem for the European Parliament, but also for big companies. People are just doing their work all the time and they forget to keep their skills up to date.

I'm also a professor teaching AI, data science and ethical regulation at an online university, and before that I was teaching in the Netherlands, to both students and professionals who wanted to upskill themselves and get up to date on this. This is fundamental, and the AI Act stipulates that companies using AI should provide training so that workers actually understand what they're using.

This is very important because these tools need to be used properly. ChatGPT is a great example because if you write the prompts better, you get better results. This is where people need to be trained. One of my students came from risk management and started the course because she was worried about AI, but she told me that after finding out what it is, she was not worried anymore. This is the point, right? If workers get to know the technology, they can control it, not fear it.

It has been suggested that workers in India and sub-Saharan Africa have done work passed off as AI – how is the European Parliament making sure that it is not exploiting workers in other countries?

This is where what I was saying at the beginning about our ad-hoc solutions is important. We use a lot of open-source software, which allows us to make sure that the software not only works but is ethical too.

There is of course a lot of lobbying from big tech companies trying to push us to use certain solutions because we’re a big client, but we push them too, especially on the innovation side. We also do training for our data scientists to ensure that they know about the ethics and regulation of this software.

How possible do you think it is for Europe to chart its own course in AI and break free of the US and other foreign states?

This is a fundamental point, especially considering the geopolitical situation now. We always need to remember that the US is our ally, because we are friends with them, while there are other countries that are not so much. "Break free", I think, is a big phrase. At the end of the day, technology is not democratic. Not everybody can have the same technology.

If we do AI, it's based on hardware that is produced largely in the US. Nvidia GPUs are the fundamental hardware, and we cannot break free of this. In Europe, there is a great company called ASML, based in the Netherlands. This is a fundamental company because it's one of the biggest producers of the machines that make semiconductors. Even the US doesn't have this.

Because of the political situation in Taiwan, the US is now building semiconductor factories in Texas like the ones there. They're afraid that something could happen in Taiwan and they'll lose the leverage on building chips and the knowledge based there. We cannot break free because of these constraints, right?

What we can do in Europe is keep building knowledge so that when we need to use AI, we have the freedom to choose. That's what's lacking now: we don't have the freedom to choose between an AI model from the US, from Europe, or even from China or Russia. This is fundamental, but again, technology is not democratic.