
Preparing for the EU’s Artificial Intelligence Act

Frans van Bruggen, Policy Officer on FinTech and AI at De Nederlandsche Bank, discusses his work on the EU’s upcoming AI Act and how it might affect business



In 2021, the EU became the first regulator in the world to propose a legal framework for artificial intelligence. The framework, which will form the basis of the EU’s proposed Artificial Intelligence Act, aims to standardize AI regulations and mitigate associated risks.

Frans van Bruggen is the Policy Officer on FinTech and AI at the Dutch central bank, De Nederlandsche Bank (DNB). He’s also one of the policymakers behind the EU’s upcoming AI Act.

In this week’s Business of Data podcast, van Bruggen speaks to us about what rules the act might impose, the four categories that make up its risk-based approach to AI and what this all means for business.

“The European Commission is still negotiating the AI Act,” van Bruggen says. “This will be a horizontal regulation, and making it risk-based means the higher the risk an AI application poses to the public, the more heavily it will be regulated. Unacceptable-risk applications [such as social scoring or biometric systems used for law enforcement in public spaces] will be completely prohibited.”

Closing the ‘AI Responsibility Gap’

Our research suggests that company executives outside of risk- and AI-focused teams still generally lack a complete understanding of the role AI ethics plays in a modern enterprise and why it’s important. While concerns such as explainability and bias are usually top of the agenda, van Bruggen believes there are other areas that should be getting more attention.

He explains: “There’s what I call the ‘responsibility gap’. Now that we have more machines deployed across [financial] institutions making certain decisions, that doesn’t take responsibility for the AI’s outcomes away from humans. Sometimes, we think, ‘It’s a machine. I’m not responsible for the output.’

“Where in the organization do you place the responsibility? I think it should be placed high up, with the Chief Risk Officer or a similar role. Another area to consider is federated learning. That means the data used to train the model stays decentralized; the algorithm goes to the data, and this helps address some privacy concerns around AI.”
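To illustrate the idea van Bruggen describes: in federated learning, each institution trains on its own data locally, and only model updates travel to a central server to be averaged. The sketch below shows a minimal federated averaging loop in Python with NumPy. It is an illustrative assumption on our part, not a method prescribed by van Bruggen, DNB or the AI Act: the three clients, the linear model and the hyperparameters are all hypothetical.

```python
# Minimal sketch of federated averaging: data stays with each client,
# the training algorithm "goes to the data", and only weights are shared.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train on one client's private data; raw records never leave it."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical ground-truth model

# Each "client" (e.g. a financial institution) holds its own dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Clients train locally; only the resulting weights are exchanged.
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)  # server averages the updates

print("learned weights:", global_w)  # converges toward [2.0, -1.0]
```

The privacy benefit comes from what is exchanged: the server only ever sees model weights, never the underlying customer records, which is why the technique is attractive in regulated sectors such as finance.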

Ultimately, van Bruggen predicts that new EU regulations for responsible AI use will come into effect in 2024. That means AI-focused executives have roughly two years to prepare their organizations.

“Negotiating its terms takes a lot of time because there are a lot of member states involved,” he concludes. “Each one comes with its own values, so there are different perspectives on what should be regulated. For instance, there are a lot of issues around the definition of AI, what its scope is and which applications should fall into the unacceptable-risk category.

“If the GDPR is anything to go by, this regulation could have global implications. So, it’ll have to be reviewed regularly, because we don't know how AI is going to develop in the future.”

Key Takeaways

  • The EU's AI Act may come into force in 2024. The act is still under negotiation by EU member states, but executives should be considering its implications for their AI strategies now
  • Take a risk-based approach to AI governance. AI leaders should implement AI ethics policies that ensure stricter governance processes are followed for AI use cases with greater potential to negatively impact consumers’ lives if implemented poorly
  • Responsible AI should be a C-Suite issue. Van Bruggen believes responsibility for ensuring AI is used responsibly should fall to the Chief Risk Officer or a similar role

To get more insights from Europe's data & analytics experts, attend our event, Chief Data & Analytics Officers, Europe, taking place in Amsterdam on 18-30 October. Register your interest to get updates about the latest speakers, agenda sessions and ticket options.