Colleagues and Copilots

Donald Farmer, August 2025

What happens when your most data-driven executive disagrees with your most experienced manager … and they’re both consulting different AIs?

That’s the reality in many meetings today, and most of us aren’t ready for it. AI is moving from a silent assistant in the background to an active participant in decision-making. The opportunity is great, but so is the risk; without a clear design for how humans and AI work together, decisions can become less accountable, less transparent, and ultimately less effective.

For years, AI adoption strategies have followed a predictable playbook: identify a task, automate it, measure efficiency gains. While that approach works for routine, low-stakes processes, it breaks down in environments where context, judgment, and values matter.

In high-stakes domains, from financial risk assessment to healthcare diagnostics, automation cannot simply replace human oversight. Efficiency without sound judgment creates dangerous blind spots; when AI decisions aren't explainable or accountable, efficiency gains can quickly become regulatory and reputational risks.

Most importantly, while AI is primarily about processing information and making optimal decisions, human intelligence includes attention, perception, imagination, and moral or ethical vision.

This is where hybrid intelligence comes in: a purposeful approach to designing systems where human expertise and machine intelligence work in concert, each doing what they do best.

Successful hybrid intelligence systems rest on three core design principles:

1. Decision Mapping

Not all decisions are created equal. Some benefit from AI speed and scale; others demand human intuition and ethical reasoning. In short, different types of decisions require different types of intelligence. Mapping decision types to the right human–AI configuration prevents automation from eroding trust or quality.
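For readers who like to see structure on the page, here is a minimal sketch, in Python, of what such a mapping might look like as a simple routing rule. The decision attributes and the three modes (AI-led, human-led, joint) are illustrative assumptions, not a fixed taxonomy.

    # Hypothetical sketch: routing a decision to a human-AI configuration by its characteristics.
    # The attributes and thresholds below are illustrative, not a prescribed framework.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        name: str
        reversible: bool      # can the outcome be undone cheaply?
        stakes: str           # "low", "medium", or "high"
        needs_judgment: bool  # does it involve ethics, values, or ambiguous context?

    def route(decision: Decision) -> str:
        """Return who leads: 'ai', 'human', or 'joint'."""
        if decision.needs_judgment or decision.stakes == "high":
            return "human"    # human leads; AI may still inform the decision
        if decision.reversible and decision.stakes == "low":
            return "ai"       # safe to automate, with periodic human review
        return "joint"        # AI recommends, a human approves

    print(route(Decision("reorder office supplies", True, "low", False)))       # -> ai
    print(route(Decision("approve a large credit line", False, "high", True)))  # -> human

Even a rough rule like this forces a useful conversation: which of our decisions are reversible, which carry real stakes, and which genuinely require human judgment?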

2. Information Flow

In the best hybrid systems, data, analysis, and context move freely between human and machine actors. That means the AI understands enough about human priorities to frame outputs usefully — and humans have enough visibility into AI reasoning to validate or challenge it.

3. Governance and Accountability

Every AI-assisted decision should have a clear chain of accountability. This means building explainability and audit trails into the system from day one, not bolting them on after problems arise.
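Again, a small sketch can make the point concrete. The snippet below records an AI-assisted decision in a simple append-only log; the field names and JSON-lines format are illustrative assumptions rather than any standard schema, but they capture the essentials an auditor would ask for: what the model saw, what it recommended, what the humans decided, and why.

    # Hypothetical sketch: logging an AI-assisted decision so it can be audited later.
    # Field names and the JSON-lines log file are illustrative choices, not a standard.

    import json, datetime

    def log_decision(model_version, inputs, recommendation, human_decision, rationale,
                     path="decision_audit.jsonl"):
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,        # which model produced the recommendation
            "inputs": inputs,                      # what the model saw
            "ai_recommendation": recommendation,   # what it suggested
            "human_decision": human_decision,      # what was actually decided
            "rationale": rationale,                # why the human agreed or overrode
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_decision("triage-model-2.1", {"symptoms": ["chest pain"], "age": 58},
                 "escalate to cardiology", "escalate to cardiology",
                 "AI recommendation consistent with clinical assessment")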

Consider a hospital triage example: AI models may rapidly analyze symptoms, history, and lab results to recommend next steps. But the human medical team still evaluates patient communication, nonverbal cues, and ethical considerations before making a final call. That combination drives both speed and trust.

To help teams model these scenarios and put this hybrid approach to work, I have put together a workshop focused on this new way of making decisions.

In Colleagues and Copilots: Rethinking Decision-Making with Hybrid Intelligence, participants will build a framework they can take back to their organisations immediately.

Through interactive mapping exercises, you’ll chart your organisation’s most critical decision points and identify where AI copilots may add genuine value. You’ll design collaboration “handoffs” that define when humans lead, when machines lead, and when they decide together.

I also believe in tackling governance head-on: how to create explainable systems that satisfy both technical and business requirements; how to build audit trails that stand up to scrutiny; and how to balance speed with accountability in real-time decision-making.

One central concept we’ll explore is what I often call analytic dignity: ensuring that AI integration doesn’t displace human expertise but instead elevates it. This means shifting some professional roles from repetitive, task-based work toward higher-order, insight-driven contributions.

So, I hope you’ll consider this workshop. You’ll leave with:

  • A customised decision architecture map for your organisation.
  • A governance model tailored to your risk and regulatory environment.
  • A 90-day implementation plan with clear milestones and resistance-mitigation strategies.

Hybrid intelligence isn’t a far-off concept. It’s already shaping strategy today. Organisations that get this right will make faster, more resilient decisions, avoid costly AI missteps, and retain the trust of customers, regulators, and employees.

The role of leadership is pivotal. Left to evolve without clear direction, AI adoption can fragment decision processes, erode accountability, and deepen cultural divides between “AI believers” and “AI skeptics.” With the right architecture, leaders can ensure that AI copilots strengthen, rather than weaken, the organisation’s decision-making fabric.

So we face a design challenge: how to harness both human judgment and machine efficiency in a partnership where both thrive and where your organisation can navigate complexity with clarity, speed, and confidence.

If you’re ready to move beyond the automation mindset and into a future where your human teams and AI copilots work side-by-side to make better, faster, more accountable decisions, this workshop is your next step.


🎤 Donald Farmer is set to transform how you think about AI collaboration at The Data and AI Conference Europe, 13 – 17 October 2025, in Central London — not once, but twice!
Get ready for two powerhouse sessions from one of the most respected voices in analytics, strategy, and AI.

Workshop: Colleagues and Copilots: Rethinking Decision-Making with Hybrid Intelligence
🗓 Monday, 13 October 2025 | ⏰ 1:45 – 5:00 PM
📍 etc.venues Fenchurch Street, 8 Fenchurch Pl, London

Closing Keynote: Colleagues and Copilots – Rethinking Decision-Making in the Hybrid Intelligence Era
🗓 Tuesday, 14 October 2025 | ⏰ 4:40 – 5:20 PM
📍 etc.venues Fenchurch Street, 8 Fenchurch Pl, London

From practical frameworks for human-AI decision workflows to bold new ways of thinking about “analytic dignity,” Donald’s sessions will give you the strategic insight and actionable steps to integrate AI copilots into your organisation with confidence and clarity.

Don’t miss the chance to learn from one of the most innovative minds in the field.
