This pre-conference interview explores practical AI governance, from EU AI Act compliance to incident response, with insights from Andrea Isoni.

  1. Many organisations are struggling to move from AI principles to real controls. What is the biggest gap you see between AI policy and how AI is actually used in practice?

Given the EU AI Act (and other frameworks), companies are still lagging behind in implementing governance, for several reasons: uncertainty about whether AI solutions (or bespoke projects) will have a valid ROI (and thus be scalable), uncertainty about how to implement cost-effective AI governance, and confusion over how to ‘split responsibility’ between the company and any AI vendor(s). AI governance frameworks like ISO 42001 or the NIST AI RMF exist to ‘show the effort a company makes towards regulations’ (often called a presumption of conformity), but bridging that gap remains a challenge.

  2. With the EU AI Act and new global regulations coming into force, what are the most common risks companies are not yet prepared for?

The primary risk is using open-source or even proprietary models that are, sadly, not compliant with the EU AI Act or other regulations. As of January 2026, several vendors have not signed the EU AI Act’s “General Purpose AI” Code of Practice: Grok (xAI) is only partially compliant, while Meta’s Llama and most Chinese vendors are not compliant.

Alternatively, a company could host models locally on compliant infrastructure, effectively taking full responsibility itself. However, I can hardly imagine many companies achieving full compliance this way, as it would likely cost more than the ROI they would gain from the solution.

Currently, Microsoft, Google, Anthropic, OpenAI, and Mistral are fully compliant. However, using them still exposes a company to two specific risks.

First, if a company trains the model on its own proprietary data (i.e., fine-tuning), the client company effectively becomes a ‘provider.’ This means they are legally required to maintain technical documentation, risk management systems, and registration in EU databases: obligations designed for tech giants, not standard enterprises.

Second, even using a compliant model ‘without any modification’ (i.e., no fine-tuning) still exposes the company to a common set of obligations: security tests, harm assessments, data pipeline tests, and related incident response protocols.
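To make those examples concrete, here is a minimal sketch of what one such automated check might look like, assuming a hypothetical log of model outputs (`outputs.jsonl`) and an illustrative blocklist; real security tests and harm assessments are far broader than this.

```python
import json

# Hypothetical, illustrative screen: scan logged model outputs for terms
# flagged by a harm-assessment policy. Real EU AI Act-aligned testing would
# also cover security, bias, and robustness; this only shows the shape.
BLOCKED_TERMS = {"ssn", "credit card", "password"}  # placeholder policy list

def screen_outputs(log_path: str) -> list[dict]:
    """Return logged responses that trip the harm screen."""
    flagged = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # expects {"id": ..., "response": ...}
            text = record["response"].lower()
            hits = [term for term in BLOCKED_TERMS if term in text]
            if hits:
                flagged.append({"id": record["id"], "hits": hits})
    return flagged

if __name__ == "__main__":
    for item in screen_outputs("outputs.jsonl"):
        print(f"Incident candidate {item['id']}: matched {item['hits']}")
```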

For completeness, there is a grace period until 2027 if companies are prepared to use ‘old’ models (released before August 2025), but I do not see that being a convenient option in many cases. Basically, there is no free lunch anymore.

  3. In your workshop, you walk through governance from policies all the way to incident response. Why is it so important that these elements are designed together rather than in isolation?

If a company changes a policy, for example, that change should be reflected all the way down to the actual testing and incident response procedures. Otherwise, the policy itself is just empty words. All I do in the workshop is draw attention to the connections between the different components involved so they can be as ‘aligned’ as possible. Naturally, changes (to policies, processes, scripts) are never-ending and evolving; however, if you understand the links, it is much easier to realign everything quickly and cost-effectively.
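A minimal sketch of that linkage, with hypothetical policy IDs and control names: each policy clause declares the controls that implement it, so a policy change immediately surfaces which tests and incident-response steps must be revisited, and a policy with no linked control is flagged as ‘empty words’.

```python
# Hypothetical traceability map: policy clause -> controls that implement it.
# All IDs and names are illustrative, not taken from any real standard.
POLICY_CONTROLS = {
    "POL-07 (model output safety)": ["harm_screen_test", "incident_runbook_4"],
    "POL-12 (data retention)": ["pipeline_retention_test"],
    "POL-15 (vendor oversight)": [],  # gap: policy with no implementing control
}

def find_gaps(mapping: dict[str, list[str]]) -> list[str]:
    """Return policies that no test or procedure actually backs."""
    return [policy for policy, controls in mapping.items() if not controls]

for policy in find_gaps(POLICY_CONTROLS):
    print(f"WARNING: {policy} has no linked control; update tests or runbooks.")
```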

  4. You provide participants with real templates, procedures, and even Python scripts. How does this change the way teams approach AI governance after the workshop?

While every team member naturally has their own expertise, I believe having a ‘holistic view’ of all the components of AI governance—and how they interact in practice—can result in much more effective collaboration, efficiency, and output. Misunderstandings, or fundamental questions like ‘why are we doing this?’, can be drastically reduced if each member has at least an overview of what every other member is doing.

  5. How should organisations assess and onboard new AI tools without slowing innovation or creating unnecessary bureaucracy?

The answer is very company-dependent. The key is that any AI solution assessment should consider ‘everything at scale.’ A true assessment quantifies (or at least estimates) what the solution will cost once scaled, explicitly including the reallocation of people’s tasks, security, governance, and new or additional infrastructure. If that is done ‘right,’ the ROI estimate will be accurate, and therefore the onboarding decision will be correct.
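As a back-of-the-envelope illustration (all figures hypothetical), the point is that governance, security, infrastructure, and task-reallocation costs must sit inside the same ROI formula as the headline benefit:

```python
# Hypothetical annual figures for one AI solution, scaled to full rollout.
benefit_at_scale = 500_000        # estimated productivity gain
licence_and_infra = 120_000       # model/API fees plus new infrastructure
security_and_governance = 90_000  # testing, audits, incident response
task_reallocation = 60_000        # retraining and shifted responsibilities

total_cost = licence_and_infra + security_and_governance + task_reallocation
roi = (benefit_at_scale - total_cost) / total_cost

print(f"Total cost at scale: €{total_cost:,}")
print(f"ROI: {roi:.0%}")  # here: (500k - 270k) / 270k ≈ 85%
```

Leave out the last three cost lines and the same solution looks roughly three times more attractive than it really is, which is exactly how onboarding decisions go wrong.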

  6. What is your top piece of advice for leaders who want to build AI governance that is compliant, practical, and sustainable over time?

Unless your company is tech-first, cybersecurity or quality governance frameworks are probably already in place; integrate AI governance into these existing systems rather than creating new ones.

Regardless, remember that the more complex the pipeline and the scope of the AI solutions, the more complex (and costly) the AI governance on top will be. Hence, favour ‘small/narrow scope’ AI solutions as much as you can.

If you are just starting, and some sort of governance (like data governance) is already in place, map its roles and responsibilities onto the new AI governance and start from there (e.g., DPO → AIPO, data custodian → AI custodian, etc.).


To explore these ideas in more depth, join Andrea Isoni at the Data Governance, AI Governance and Master Data Management Conference Europe, where he will lead the workshop AI Governance: From Policies Drafting to Incident Response on Wednesday, 25 March 2026 in London.

In this hands-on session, Andrea will walk participants through the full lifecycle of AI governance, from identifying and mitigating AI risk to drafting policies, procedures, work instructions, and incident response mechanisms aligned with the EU AI Act and ISO standards. Attendees will work through a real-world example and leave with practical materials they can adapt directly within their own organisations.

If you are looking to move beyond high-level principles and build AI governance that works in practice, this workshop is not to be missed.

Find out more here: Conference

Purchase your tickets here: Tickets
