As organisations race to adopt AI, one of the biggest challenges is ensuring innovation doesn’t outpace responsibility. At the Data & AI Conference Europe 2025 (13–17 October, London), Paul Dongha (Head of Responsible AI & AI Strategy, NatWest Bank) and Ray Eitel-Porter (Founder, Lumyz Advisory) will deliver AI Governance in Action: A Practical Roadmap from Principles to Implementation. Ahead of their session, we caught up with them to discuss the lessons learned from leading governance programmes at NatWest, Lloyds, and Accenture, the unique risks posed by generative and agentic AI, and what it really takes to build trust and accountability into AI systems at scale.


Conference: Data & AI Conference Europe 2025

  1. From Principles to Practice
    You’re both deeply involved in translating AI ethics and principles into actionable governance. What’s the biggest challenge organisations face when trying to operationalise AI governance frameworks?

It often depends on the industry in which the company operates. Some sectors, such as financial services and pharmaceuticals, have well-developed risk and compliance programmes driven by regulatory requirements. For companies like these, the challenges tend to be in the technical details of evaluating and mitigating the potential harms of AI. For companies without mature risk and compliance processes, on the other hand, the biggest challenge is typically identifying where AI is actually being used in the organisation and establishing watertight checkpoints that catch every future proposed use of AI (whether developed internally or procured externally), so that each use can be assessed.

Small companies often don’t know where, or how, to start. They may be confused by all the media coverage of AI risks, yet at the same time anxious that their competitors are racing ahead with AI and they will be left behind. Learning and upskilling are crucial: what exactly is AI? How does it work, and what are the real risks it poses? Once that is established, they need to recognise that AI governance is a management challenge, not one solved by technological guardrails alone. They have to evolve their governance (people and their roles), their organisational processes (to manage risks), and their technology platforms, so that AI is developed and tested responsibly.
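To make the idea of an intake checkpoint concrete, here is a minimal sketch in Python, using hypothetical names such as `UseCase` and `register_use_case`: a single mandatory entry point through which every proposed AI use, built or bought, is logged and given a first-pass risk tier before any detailed assessment.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class UseCase:
    """One proposed use of AI, whether built in-house or procured."""
    name: str
    owner: str
    description: str
    vendor: Optional[str] = None        # None means developed internally
    customer_facing: bool = False
    uses_personal_data: bool = False
    submitted: date = field(default_factory=date.today)


def triage(use_case: UseCase) -> RiskTier:
    """Rough first-pass triage; the detailed assessment is still done by people."""
    if use_case.customer_facing and use_case.uses_personal_data:
        return RiskTier.HIGH
    if use_case.customer_facing or use_case.uses_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW


REGISTER: list = []


def register_use_case(use_case: UseCase) -> RiskTier:
    """The single mandatory checkpoint: nothing is built or bought without passing through here."""
    tier = triage(use_case)
    REGISTER.append((use_case, tier))
    return tier


# Example: a procured, customer-facing chatbot lands in the HIGH tier for full review.
print(register_use_case(UseCase(
    name="Customer service chatbot",
    owner="Contact Centre Ops",
    description="Vendor chatbot answering account queries",
    vendor="ExampleVendor Ltd",
    customer_facing=True,
    uses_personal_data=True,
)))
```

In practice the register would live in a GRC platform rather than an in-memory list, and the triage rules would be far richer, but the essential pattern is one unavoidable gate that no AI use case can bypass.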

  2. Lessons from the Front Lines
    You’ve helped shape governance programmes at NatWest, Lloyds, and Accenture. Can you share a key lesson learned from those experiences that other organisations can benefit from?

Ray: Securing cross-enterprise support at a senior leadership level and finding the single sponsor who has sufficient influence and budget to own the program can be challenging, but is critically important for success. You need both strong support across multiple functions as well as a powerful advocate to lead the program.

Paul: Being given a mandate from a senior sponsorship body (ideally at Exec level) or from the Chief Data & Analytics Head to create a dedicated role – Head of Ethics/Responsible AI – with a remit to hire a team that sits not within the Privacy, Compliance or Legal departments, but alongside those functions. The other key lesson is the formation of an organisation-wide AI Ethics Council, which plays a significant role in ensuring every use case receives the right level of scrutiny to avoid accidental ethical risks – the north star being ‘just because you can, doesn’t mean you should’ deploy AI.

  3. Traditional vs. Generative AI Risk
    How do the governance needs of traditional machine learning systems differ from those of generative or agentic AI models?

Experts have been researching the potential problems with traditional machine learning, and developing mitigations, for many years: that means we have pretty well-developed technical solutions, and the challenges tend to be in execution. Generative AI and agentic models, however, are still very new and present some entirely new types of risk (such as hallucinations) for which we currently have no completely reliable solutions. Added to this, they are evolving rapidly – arguably faster than risk mitigation techniques can keep up.

Agentic AI exacerbates existing risks (even beyond those of Generative AI) and introduces new ones, including data leaks, data access issues and blurred accountability. As Agentic AI promises the ability to solve more complex business problems (through so-called ‘reasoning’, increased autonomy and multi-task execution), we have to create additional controls to constrain agent behaviour and increase transparency.
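One minimal sketch of such a control, assuming a hypothetical agent that proposes tool calls as (tool name, arguments) pairs, is to gate every call through an allowlist and an audit log before it executes, which both constrains behaviour and leaves a transparent trail for reviewers.

```python
import json
import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Only tools on this allowlist may be invoked by the agent; everything else is refused.
ALLOWED_TOOLS: Dict[str, Callable[..., Any]] = {
    "search_knowledge_base": lambda query: f"results for {query!r}",
    "draft_email": lambda to, body: {"to": to, "body": body, "status": "draft"},
}


def execute_tool_call(tool_name: str, arguments: Dict[str, Any]) -> Any:
    """Gate every tool call the agent proposes: check the allowlist, log it, then run it."""
    if tool_name not in ALLOWED_TOOLS:
        audit_log.warning("Blocked call to unapproved tool: %s", tool_name)
        raise PermissionError(f"Tool {tool_name!r} is not on the allowlist")

    audit_log.info("Approved tool call: %s %s", tool_name, json.dumps(arguments, default=str))
    return ALLOWED_TOOLS[tool_name](**arguments)


# The agent can draft an email...
print(execute_tool_call("draft_email", {"to": "ops@example.com", "body": "Summary attached."}))

# ...but an attempt to call an unapproved tool is blocked and recorded.
try:
    execute_tool_call("send_payment", {"amount": 10_000})
except PermissionError as exc:
    print(exc)
```

A production setup would add human approval for sensitive tools, spending and rate limits, and tamper-evident log storage; the sketch only shows the gating pattern itself.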

  4. Tools That Work
    What kinds of technical tools or frameworks have proven most effective in building trustworthy AI systems—without slowing down innovation?

Paul: There are numerous tools to choose from:

Large companies can use existing cloud provider platforms, which include integrated tooling to mitigate AI risks during the development lifecycle and the deployment phase (by monitoring model behaviour to ensure models stay within guardrails). Amazon AWS provides SageMaker, Microsoft Azure offers Azure AI Studio, and Google Cloud offers the Vertex AI platform and Model Garden. Other large-scale vendors (IBM and SAS, for example) have their own platforms, and many newer specialist vendors are emerging with AI-specific GRC platforms that can operate alongside the leading AI development environments.
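For teams not yet on one of those platforms, the underlying idea can be sketched in a few lines of Python: a thin wrapper that screens each model response against simple guardrail checks (here an illustrative, assumed set of PII patterns), redacts what it finds, and keeps the kind of running violation count that the platform dashboards above surface.

```python
import re
from collections import Counter

# Illustrative patterns only; real deployments use much richer detectors.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{10}\b"),
}

stats = Counter()


def check_response(response: str) -> tuple:
    """Redact anything that trips a guardrail and record it in the running stats."""
    violations = [name for name, pattern in PII_PATTERNS.items() if pattern.search(response)]
    stats["responses"] += 1
    for name in violations:
        stats[name] += 1
        response = PII_PATTERNS[name].sub("[REDACTED]", response)
    return response, violations


# Example: a draft reply that would leak an email address is redacted before it reaches the user.
safe_text, found = check_response("Contact the customer at jane.doe@example.com about the refund.")
print(safe_text)    # Contact the customer at [REDACTED] about the refund.
print(found)        # ['email']
print(dict(stats))  # {'responses': 1, 'email': 1}
```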

  5. Creating Buy-In
    How can data and AI leaders gain support from senior stakeholders when implementing governance processes that may add friction to AI delivery?

Ray: Governance processes should be designed to help ensure that AI delivers the intended results and value. Data and AI leaders need to communicate this, rather than letting other stakeholders perceive governance as a negative, compliance-driven activity. No senior stakeholder wants AI to deliver inaccurate results, cause a backlash by using offensive language with customers, or trigger regulatory scrutiny. It’s the role of AI governance to prevent these and other mishaps, which would undermine confidence in the company and its use of AI.

Paul: One way for data and AI leaders to generate support is through creating learning and development pathways on AI governance and risk management to educate and inform all levels of an organisation. Currently, there is a certain amount of misunderstanding as to what AI is all about, what it can achieve and how it operates. Learning helps demystify AI. 

Another point to note is that AI risk management should be part of the entire AI development lifecycle, not just a compliance activity undertaken shortly before go-live. It is not a tick-box exercise; rather, it should be threaded through all phases of the lifecycle, from conception to post-deployment, and any perceived ‘friction’ should not be viewed as an impediment.

  6. The Human Side of Governance
    Governance is not just about policies and tools. What role do training, awareness, and culture play in the success of responsible AI?

As AI becomes embedded in most of a company’s activities and usage of AI tools such as Copilot becomes widespread, basic AI risk awareness training across all areas of the company is essential. It’s important to implement technical guardrails for AI, but many uses, Copilot among them, are increasingly open-ended and rely primarily on alert, careful user behaviour to prevent potential harms.

Culture is such an important topic to address when it comes to our relationship with AI. We recommend that companies embrace AI as a technology, but with an equal amount of focus on training and development. Investing in training people on how to use AI in day-to-day work – with human-in-the-loop validation as a mandatory step – helps to form a healthy relationship with the technology. People need to know that AI is not ‘magical’; AI chatbots are often persuasive, but not always right. The cultural shift that AI may bring about needs open dialogue with employees, so that they can voice their concerns and management can respond sincerely.

  7. Future-Readiness
    With regulations evolving fast, what should organisations prioritise now to ensure their AI governance is resilient, flexible, and future-ready?

Setting up a robust AI governance programme is the critical first step, because it’s much easier to adapt a well-functioning process to new requirements than to build one from scratch. That said, it’s also important to build horizon scanning into the programme, covering changes in regulation and guidance, technology, and customer expectations.

The bottom line is that companies that want to use AI have to invest now. That might mean an upfront outlay for technical tooling (to monitor AI for risks) and hiring one or two people to help mobilise an AI governance programme. This will generate more work, of course, so employees have to be ready and willing to take on risk management activities they may not have done before, and access to the right expertise is essential. Although some AI seems essentially ‘free’ (Copilot for Office 365, for example), learning how to use it well requires an investment in people.

  8. Beyond the Organisation
    You’ve both engaged in global initiatives shaping the future of responsible AI. What role do conferences like DAI2025 play in bridging the gap between academic thinking, policy, and real-world application?

There has been a big increase in the recognition of the importance of AI governance over the past year or so. It’s great to see this reflected in conferences like DAI2025, including sessions on AI governance. These provide an opportunity for those who are starting on the journey to build their understanding and for others to dig into the latest research and technical solutions.

Conferences like DAI2025 play a crucial role in bringing together experts to exchange views and share best practice. They allow participants to learn from those at the cutting edge of the AI revolution and from peer organisations that are further down the AI adoption path, helping them navigate a healthy way forward and avoid blind alleys.


AI governance is no longer a theoretical discussion; it’s the foundation for building trust and scaling innovation responsibly. Drawing on their deep experience at NatWest, Lloyds, and Accenture, Paul and Ray show how to turn principles into practice without stifling progress.

Don’t miss their session, AI Governance in Action: A Practical Roadmap from Principles to Implementation, on Wednesday, 15 October 2025, 12:10–12:50 PM at the Data & AI Conference Europe 2025, where you’ll gain a clear framework to identify risks, implement safeguards, and lead your organisation with confidence in the AI era.

Check out Ray & Paul’s book “Governing The Machine”: Find Out More
