
Meet the Speaker: Oyetola Florence Idowu on Governing AI in Healthcare
Healthcare systems around the world are under increasing pressure to deliver better outcomes while managing growing demand. In the UK, the NHS has set out an ambitious vision for preventive, data-driven care, where data and AI help identify risks earlier and support more proactive interventions.
But scaling AI in healthcare requires more than advanced technology. It requires strong governance, public trust, and clear accountability to ensure that AI systems are used safely, ethically, and transparently.
In this Meet the Speaker interview, Oyetola Florence Idowu, Senior IT Business Analyst at NHS SCW and Global AI Delegate with GAFAI, shares her perspective on how AI governance can support the NHS Long Term Plan, the principles needed to build patient trust, and what organisations can learn from global approaches to responsible AI in healthcare.
The NHS has set out a long-term vision for preventive, data-driven care. From your perspective, what role does AI governance play in making that vision achievable?
AI governance is the delivery mechanism that turns ambition into a safe, scalable reality. Preventive, data-driven care relies on identifying risk early and intervening sooner, but that requires AI systems that clinicians and patients will trust and accept, and that regulators will support.
Governance provides clinical safety assurance, clear validation pathways, ongoing monitoring, and escalation routes for when models drift or performance degrades. It sets expectations for bias testing, inclusion, and ongoing fairness reviews. It must always be clear who owns the AI system, who signs off on its use, and who is responsible when something goes wrong. By doing so, governance reduces uncertainty for frontline teams, so AI becomes something they can confidently use, not something they need to “work around.”
Without governance, AI remains a collection of pilots. With governance, it becomes a trusted capability that can scale across the NHS.
Healthcare relies on public trust more than almost any other sector. What are the most important governance principles for building and maintaining patient trust in AI-enabled care?
Patient trust is built when people feel safe, respected, and informed. In AI-enabled care, the most important governance principles are:
Transparency: Patients and clinicians should know when AI is used, what it is used for, and what it does and does not decide.
Consent and choice: Where appropriate, patients should have meaningful choices (e.g., a human alternative where feasible) and clarity on how their data is used.
Fairness and equity: Systems must be tested for bias across demographics and clinical subgroups, with mitigation plans and ongoing monitoring.
Accountability: A clear chain of responsibility. AI does not remove responsibility from clinical leadership; it must be evident who is accountable for safe use.
Privacy and security by design: Strong controls on data access, minimisation, and protection of sensitive health information.
Explainability and contestability: People need the ability to understand and challenge decisions, especially where AI influences access, prioritisation, or care pathways.
Trust is not a one-time approval. It is maintained through continuous monitoring, open communication, and demonstrating that governance is active, not just a document.
You work at the intersection of healthcare delivery, AI governance, and global best practice. What lessons from international approaches do you think are most relevant for the NHS today?
Three international lessons stand out as immediately relevant:
Risk-based governance works best. Global approaches increasingly differentiate between low-risk automation and high-risk clinical decision support. The NHS benefits from applying “proportionate governance,” with stronger controls where the potential for harm is higher.
Governance must cover the full lifecycle. It should extend to deployment and continuous assurance, not just upfront evaluation.
Public engagement is a core requirement, not a communications add-on. International best practice increasingly treats public involvement as part of design. In healthcare, trust grows when communities feel engaged early, not informed late.
For the NHS, the key is translating these lessons into practical operational models with embedded governance roles, clear pathways, and consistent assurance standards that support scale across Integrated Care Systems.
How can organisations balance innovation with accountability when introducing AI into highly regulated environments like healthcare?
The balance comes from making governance an enabler, not a barrier. The most effective approach is to innovate within clearly defined guardrails:
Start with purpose and patient safety by identifying clinical and operational problems where AI can add value; define expected outcomes and safety boundaries early.
Use structured pilots: small-scale, controlled deployments with measurable success criteria and patient/clinician feedback loops.
Adopt “human-in-the-loop” practices: clinicians remain decision-makers; AI supports prioritisation, pattern detection, or workflow efficiency.
Continuous assurance: monitor performance, bias, and adoption post go-live, not just during development.
Supplier governance and contracts: require transparency on model behaviour, data handling, and updates; ensure accountability for changes.
In short, you don’t reduce accountability to gain innovation; you design innovation so accountability is built in.
In your session, you focus on transparency, fairness, and accountability. How can these principles be embedded into AI healthcare initiatives from the start rather than added later?
Embedding these principles requires shifting from “build then check” to “design with governance.” Practical methods include:
Transparency at requirements stage: include requirements for explainability, user-facing disclosure, and audit logging as non-negotiables.
Fairness by design: define protected groups and equity measures upfront, ensure representative data, and run bias testing before deployment.
Accountability through governance structures: assign named owners for clinical safety, data governance, and operational oversight; document decision rights and escalation routes.
Co-design with users: include clinicians, patients, and information governance (IG) teams early, as this avoids usability and trust gaps that are costly to fix later.
Model and workflow integration testing: ensure AI outputs fit clinical workflows and do not create unsafe workarounds.
Ongoing monitoring plan: define post-deployment metrics (performance, equity, adoption) and review cadence before go-live.
When these are built into the delivery lifecycle, just as security and clinical safety are, they become part of “how we deliver,” not an afterthought.
For healthcare leaders beginning their AI journey, what practical first steps would you recommend to ensure responsible, equitable, and sustainable outcomes?
I recommend starting with a pragmatic five-step approach:
Pick a priority problem with clear value. Focus on a use case tied to outcomes, access, safety, prevention, or workforce workload, not “AI for AI’s sake.”
Get your data foundations right. Ensure data quality, interoperability, and governance ownership. Poor data leads to poor AI and loss of trust.
Establish a minimal governance model early. Assign accountable owners (clinical safety, IG, digital), agree on assurance steps, and define risk approach.
Run a controlled pilot with clear measures. Baseline current performance, set success metrics, include equity measures, and involve frontline users.
Plan for scale and sustainability. Decide how the tool will be supported, monitored, updated, and evaluated long-term and how you will maintain public trust.
If leaders do these five things well, AI becomes a trusted capability that improves outcomes and reduces inequalities rather than a short-lived pilot.
To explore these ideas in more depth, join Oyetola Florence Idowu at the Data Governance, AI Governance and Master Data Management Conference Europe, where she will present Bridging Trust and Innovation: A Governance Roadmap for AI-Enabled Preventive Care in Healthcare on Monday, 23 March 2026 in London.
In this forward-looking session, Oyetola will explore how responsible AI governance can support the NHS vision for preventive, data-driven healthcare. Drawing on her experience as a Senior IT Business Analyst with NHS SCW and a Global AI Delegate with GAFAI, she will outline how organisations can embed transparency, fairness, and accountability into AI initiatives while maintaining the pace of innovation.
Attendees will gain practical guidance on implementing governance principles for AI adoption in healthcare, building patient and public trust in AI systems, and developing data governance strategies aligned with NHS priorities. The session will also highlight lessons from global best practices in responsible AI and how they can be applied within healthcare systems.
If you are working at the intersection of healthcare, data, and AI governance, this session will provide actionable insights for balancing innovation with compliance while delivering equitable, patient-centred outcomes.
Find out more here: Conference
Purchase your tickets here: Tickets


