
The Invisible Layer of AI Risk: Governing Behaviour Between Systems
Jade Emmanuel, Senior Data & AI Consultant, Columbus Global
Over the past few years, organisations have invested heavily in governing artificial intelligence at the model level.
We audit training data for bias. We test for explainability. We validate performance before deployment.
These are necessary controls. However, they are not the whole picture.
A structural shift is underway in how AI systems operate inside organisations. Increasingly, we are no longer deploying isolated models that produce recommendations for humans to review. We are deploying agentic systems — systems capable of setting sub-goals, calling tools, interacting with other systems, and taking autonomous action within defined boundaries.
And critically, these systems do not operate alone. They interact.
When AI systems begin interacting with one another, risk does not disappear. It migrates upwards, from the individual model to the behaviour that emerges between systems.
This is the invisible layer of risk most governance frameworks are not yet designed to manage.
Consider a common operational scenario. One system forecasts demand. Another optimises production schedules. A third monitors quality thresholds. A fourth selects approved suppliers based on constraints. Each system is validated independently. Each passes governance review. Each performs as intended.
Yet, under certain conditions, small local decisions begin to amplify. A demand adjustment triggers a scheduling shift. Quality tolerances are optimised for throughput. A compliant but materially different supplier input is selected. No individual decision violates policy. But collectively, they produce an outcome that was never explicitly designed, and cannot easily be reconstructed after the fact.
No single model has failed. The system has.
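The compounding effect is easy to see in miniature. The sketch below is purely illustrative — every system name, limit, and adjustment is hypothetical — but it shows how four adjustments that each pass a local policy check can together drift well outside a systemic bound that no individual check enforces.

```python
# Illustrative sketch (all names and thresholds are hypothetical):
# four independently "compliant" adjustments compound into a systemic
# drift that no single local check would ever flag.

LOCAL_LIMIT = 0.05    # each system may adjust its input by at most ±5%
SYSTEM_LIMIT = 0.10   # the end-to-end outcome should stay within ±10%

def apply_adjustment(value: float, adjustment: float) -> float:
    """Apply one system's locally validated adjustment."""
    assert abs(adjustment) <= LOCAL_LIMIT, "local policy violated"
    return value * (1 + adjustment)

baseline = 100.0
adjustments = {
    "demand_forecast": 0.05,       # demand revised upward
    "production_scheduler": 0.04,  # schedule shifted to meet it
    "quality_monitor": 0.05,       # tolerances relaxed for throughput
    "supplier_selector": 0.04,     # cheaper compliant supplier chosen
}

outcome = baseline
for system, adj in adjustments.items():
    # Every individual step passes its own governance review.
    outcome = apply_adjustment(outcome, adj)

drift = outcome / baseline - 1
print(f"systemic drift: {drift:.1%}")          # 19.2% — far past the 10% bound
print(f"outside bounds: {drift > SYSTEM_LIMIT}")
```

Each call to `apply_adjustment` succeeds, yet the combined drift is nearly twice the systemic limit — the failure exists only at the level of interaction, which is exactly where most governance frameworks stop looking.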
Traditional governance frameworks assume that risk can be contained within components. They assume linear decision flows, clear ownership boundaries, and traceable cause-and-effect relationships. Those assumptions weaken when autonomous systems influence one another dynamically.
Emergent behaviour, where systems produce outcomes not explicitly programmed but still consistent with their objectives, becomes structurally possible. Cascading optimisation across systems can quietly push operations outside regulatory boundaries. Tool use introduces additional, often opaque, risk surfaces. Accountability fragments as decisions become distributed across multiple agents.
In this environment, validating individual models is necessary but insufficient.
What’s missing is governance of behaviour.
Governing behaviour means shifting the unit of analysis from “Is this model compliant?” to “How does this system behave under interaction, stress, and drift?” It requires making autonomous decision paths legible, not just outputs. It requires defining intent and boundaries explicitly enough that behaviour can be evaluated against them. And it requires the ability to intervene: not only to stop systems in crisis, but to redirect them before risk accumulates.
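What such a control might look like in practice can be sketched as follows. This is a minimal, hypothetical example — the class names, boundaries, and agents are invented for illustration, not a prescribed architecture — but it captures the three requirements above: explicit, machine-checkable boundaries; a legible decision path; and a point of intervention before an action executes.

```python
# Illustrative sketch (hypothetical names): a behavioural guardrail that
# evaluates each proposed agent action against explicitly declared
# boundaries before it executes, logging the decision path for audit.

from dataclasses import dataclass, field

@dataclass
class Boundary:
    """An explicit, machine-checkable statement of intent."""
    name: str
    check: callable  # returns True if the proposed action is within bounds

@dataclass
class BehaviourGovernor:
    boundaries: list
    audit_log: list = field(default_factory=list)

    def review(self, agent: str, action: str, params: dict) -> bool:
        """Approve or block an action before it runs, and record why."""
        violations = [b.name for b in self.boundaries if not b.check(params)]
        decision = "allow" if not violations else "block"
        # Legible decision path: who proposed what, and why it was (dis)allowed.
        self.audit_log.append((agent, action, params, decision, violations))
        return decision == "allow"

governor = BehaviourGovernor(boundaries=[
    Boundary("order_size_cap", lambda p: p.get("order_qty", 0) <= 500),
    Boundary("approved_supplier", lambda p: p.get("supplier") in {"A", "B"}),
])

ok = governor.review("supplier_selector", "place_order",
                     {"order_qty": 800, "supplier": "C"})
print(ok)                         # False — blocked before execution
print(governor.audit_log[-1][4])  # both boundaries recorded as violated
```

The point is not the specific mechanism but the unit of control: the governor evaluates behaviour at the moment of interaction, and the audit log makes the decision path reconstructable after the fact — precisely what the purely model-level checks above cannot provide.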
This is not a theoretical concern. Agentic patterns are already embedded in supply chains, financial operations, customer platforms, and healthcare environments. As autonomy increases, the surface area of interaction increases with it. The probability of unintended systemic behaviour rises accordingly.
The organisations that navigate this shift successfully will not be those with the thickest AI policy documents. They will be those that treat governance as an architectural discipline, intentionally designed into how systems operate and interact, not applied retrospectively when incidents occur.
Although bias remains a critical area of concern, the next frontier of governance risk lies elsewhere: in the dynamics of autonomous systems influencing one another in ways that are difficult to anticipate, difficult to explain, and difficult to attribute after the fact.
If governance frameworks do not evolve to address this invisible behavioural layer, organisations may find themselves technically compliant yet operationally exposed.
In my upcoming session at the IRM Data Governance & AI Governance Conference Europe 2026, I will explore what practical control architecture looks like when behaviour, rather than models, becomes the unit of risk.
Because as AI systems grow more capable, governance must evolve from auditing components to shaping systemic behaviour.
To explore these ideas in more depth, join Jade Emmanuel at the Data Governance, AI Governance and Master Data Management Conference Europe, where she will present Beyond Bias: Navigating the Systemic Risk and Accountable Design of Agentic AI on Tuesday, 24 March 2026, in London.
In this session, Jade will examine the next frontier of AI governance, moving beyond static model bias to address the systemic risks created by interacting autonomous AI systems. She will share practical guidance on designing accountable architectures, managing emergent behaviours, and establishing governance frameworks capable of auditing and controlling agentic AI.
If you are preparing your organisation for the realities of intelligent automation and autonomous decision-making, this is a session not to miss.
Find out more here: Conference
Purchase your tickets here: Tickets


