
Meet the Speaker: Cal Al-Dhubaib on Trust Engineering and Responsible AI
Trust engineering and responsible AI are becoming essential as organisations deploy increasingly autonomous and agentic systems. In this Meet the Speaker interview, Cal Al-Dhubaib shares how teams can design AI systems that balance innovation, accountability, and human oversight in real-world enterprise environments.
- AI is becoming more autonomous. What is the biggest trust challenge organisations face as agentic systems begin to make and influence enterprise decisions?
The biggest trust challenge with agentic AI is that these systems are probabilistic, which means some level of error is inevitable. That is not a defect. It is how they work.
Their reliability also depends heavily on the quality and context of the data they have access to and, because the inputs are open-ended, on how well users understand the system and use it appropriately.
Agentic systems raise the stakes because they operate across multiple steps. A small mistake early in the process can propagate through a workflow and compound into a much bigger issue, especially when the system is allowed to take action without the right checkpoints.
The good news is that this is manageable. The organisations that succeed will treat trust as an engineering discipline and design systems that strike the right balance between human oversight and machine workloads.
- You introduce Trust Engineering as a practical discipline. How is this different from traditional AI governance or risk management approaches?
Traditional AI governance relies heavily on policies and controls and is often treated as a late step in development.
Trust Engineering is a different way of framing AI risk management. It treats trust as both an engineering and design problem, in addition to traditional risk management and governance.
It provides teams with a toolkit to align data science, engineering, product, and risk stakeholders on how an AI system is expected to behave in the real world. That includes ambiguity, exception management, and human-AI collaboration.
The goal is not just to create oversight. The goal is to design AI systems that can be deployed responsibly, used correctly, and produce accountable outcomes at scale.
- Why do data, product, and risk teams so often struggle to align when it comes to AI risk and responsibility?
The challenge is that data, product, and risk teams are aligned on the goal of reliability, but they approach it through different lenses.
Data science teams focus heavily on model and data quality evaluation, which is important, but there are many valid ways to evaluate models, and those methods depend heavily on business context. Risk teams tend to focus on controls and oversight. Business stakeholders are often deep domain experts, but they are not always equipped to translate that expertise into clear requirements for probabilistic AI systems.
The bigger issue is that these groups often lack a shared vocabulary for the uncertainty inherent to AI. They may not be aligned on how to manage ambiguity, handle exceptions, or balance human-AI workflows.
When organisations lack that shared language, governance becomes cumbersome rather than a mechanism to accelerate innovation.
- In your keynote, you focus on human behaviour as well as algorithmic challenges. Why is this a more useful way to think about trust than just model performance?
AI trust is often framed as a model problem, but in practice, it is just as much a human behaviour and workflow design problem. The real question is not whether the system will fail, but where it is most likely to fail, and how to build resiliency around those points.
Most risk comes from the long tail of edge cases where the model is less reliable. I’ve seen many scenarios where businesses think about AI in a binary way: either the process is fully automated, or it is not.
The result is usually one of two extremes. Either teams enter a costly development cycle chasing edge cases, or they end up with significant overhead in system oversight.
A simple example is an agentic workflow for routine document processing, like insurance claim review. Some claims will be straightforward and highly automatable. Others, like complex multi-condition cases, may be exactly where the system fails most often. A small design shift can make a huge difference: detect those higher-risk cases and route them to a human-only queue while allowing the AI to handle the rest. And yes, AI can be used to predict when a model is more likely to fail!
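To make that routing pattern concrete, here is a minimal Python sketch of the idea, not taken from Cal's own tooling: the Claim fields, the predict_failure_risk heuristic, and the 0.3 threshold are all hypothetical placeholders for whatever risk signal a real deployment would use.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    num_conditions: int      # hypothetical complexity signal
    document_quality: float  # 0.0 (poor scan) to 1.0 (clean)

def predict_failure_risk(claim: Claim) -> float:
    """Hypothetical scorer: estimate how likely the automated pipeline is to
    mishandle this claim. In practice this could itself be a model trained on
    past automation errors."""
    risk = 0.1 * claim.num_conditions + 0.5 * (1.0 - claim.document_quality)
    return min(risk, 1.0)

def route(claim: Claim, risk_threshold: float = 0.3) -> str:
    """Send higher-risk claims to a human-only queue; let the agentic
    workflow handle the straightforward rest."""
    if predict_failure_risk(claim) >= risk_threshold:
        return "human_review_queue"
    return "automated_pipeline"

# A simple single-condition claim stays automated; a complex
# multi-condition claim is routed to a human reviewer.
print(route(Claim("C-001", num_conditions=1, document_quality=0.9)))  # automated_pipeline
print(route(Claim("C-002", num_conditions=5, document_quality=0.6)))  # human_review_queue
```

The design choice worth noticing is that the model itself does not change; only the workflow around it does, which is usually far cheaper than chasing marginal accuracy gains.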
The same applies to user behaviour. Sometimes the best risk reduction is not a better model, but better context. Imagine an auto insurance claim workflow where customers upload photos to support automated processing. If you provide simple guidance and example images of what “good input” looks like, users are far more likely to use the system correctly and avoid unexpected outcomes.
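A rough sketch of that kind of input-side intervention, again with assumed details rather than anything from the interview: a pre-submission check that returns corrective guidance when an uploaded photo is unlikely to process cleanly. The resolution and file-size thresholds are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class UploadedPhoto:
    width_px: int
    height_px: int
    size_bytes: int

GUIDANCE = (
    "Please retake the photo in good lighting, showing the full damaged area. "
    "See the example images for what a good upload looks like."
)

def check_upload(photo: UploadedPhoto) -> tuple[bool, str]:
    """Hypothetical pre-submission check: accept the upload, or return
    guidance so the user can correct it before automated processing."""
    if photo.width_px < 800 or photo.height_px < 600:
        return False, f"Image resolution is too low. {GUIDANCE}"
    if photo.size_bytes < 50_000:  # likely over-compressed or blank
        return False, f"Image appears over-compressed. {GUIDANCE}"
    return True, "Looks good - submitting for automated review."

ok, message = check_upload(UploadedPhoto(width_px=640, height_px=480, size_bytes=30_000))
print(ok, message)  # False, with corrective guidance shown to the user
```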
These kinds of design interventions are cost-effective, improve reliability, and reduce risk without requiring endless marginal improvements to model performance.
- Your Applied Trust Engineering workshop uses real AI incidents and hands-on tools. What will participants be able to do differently after attending?
Participants will leave the workshop with a practical ability to ask better questions about risk, accountability, and system behaviour, even if they are not governance professionals. They will learn how to articulate trust concerns and translate them into concrete workflow and design decisions.
The workshop also equips participants with the ability to:
- Analyse real-world AI incidents using the AI Incident Database
- Apply the RISKS framework to diagnose trust issues
- Apply the four pillars of Trust Engineering to identify practical interventions
- Collaborate more effectively across engineering, data science, and risk teams around both AI development and adoption
- What advice would you give to leaders who want to move fast with AI while still building systems that people can trust?
Your enterprise AI journey does not have to be difficult. Many of the risks can be addressed by reframing the problem and aligning the critical teams across design, engineering, and risk management.
Think about AI adoption in terms of human-AI collaboration. That makes it easier to articulate the total cost of ownership, clarify how value is created, and reduce unnecessary development costs.
Most importantly, treat trust as a product feature and a design requirement, not just a compliance checkbox.
To explore these ideas in more depth, join Cal Al-Dhubaib at the Data Governance, AI Governance and Master Data Management Conference Europe, where he will deliver the keynote Building Trust in the Era of Agentic AI on Tuesday, 24 March 2026, followed by the half-day lab Applied Trust Engineering on Wednesday, 25 March 2026 in London.
In his keynote, Cal will introduce Trust Engineering as a practical way for data, product, and risk teams to align around how agentic AI systems are designed, evaluated, and governed, balancing innovation with accountability. His workshop then takes these ideas into practice, using real AI incidents, hands-on tools, and shared frameworks to help teams reason about failure modes, risk, oversight, and human–AI collaboration.
If you are responsible for deploying AI at scale and want to move beyond abstract principles toward practical, trust-by-design systems, these sessions are not to be missed.
Find out more here: Conference
Purchase your tickets here: Tickets


