By Donald Farmer, Principal, Treehive Strategy

This article explores a fundamental shift in how organisations should approach artificial intelligence: moving from isolated projects to strategic roadmaps. It summarises ideas from Tobias Zwingmann and Chris Walker, and argues that the high failure rate of AI initiatives is not due to the technology itself, but to how companies structure their investments and expectations.


A Summary of Ideas from Tobias Zwingmann

  • The standard approach, funding a single AI project and expecting it to justify itself, is prone to failure. First-time AI project failure rates run at 70–95%, and a single failure can collapse the whole initiative.
  • Fewer than 10% of companies successfully scale AI. Many never map the broader picture, funding only one project and retreating after failure.
  • Alternative approach: Map the total AI opportunity across the organisation before approving projects. Lead with operational pain points, not AI capabilities.
  • Threshold filter: Only pursue problems with a minimum value (e.g., $10,000/month). Deferring smaller issues preserves focus while not discarding them.
  • Example: A firm spends six person-days/month on content review at $2,000/day → $12,000/month in lost billable work → qualifies for the roadmap.
  • Roadmaps make failure of individual projects less risky; the organisation simply moves to the next opportunity.
  • Prioritise fast-to-ship initiatives, sequence larger opportunities through small, self-contained steps, and let each stage fund the next.
  • Roadmaps are programmes of learning, building conviction, capability, and credibility over time. Executives should ask not whether a project will work, but how much AI could genuinely be worth to the organisation.
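The threshold filter described above can be sketched as a simple qualification check. This is a minimal illustration, not Zwingmann's actual method: the function names are hypothetical, though the $10,000/month threshold and the content-review example come from the article.

```python
# Hypothetical sketch of the roadmap threshold filter: a pain point
# qualifies only when its monthly cost exceeds a minimum value.

THRESHOLD_PER_MONTH = 10_000  # article's example threshold, in dollars


def monthly_cost(person_days_per_month: float, day_rate: float) -> float:
    """Lost billable work per month for a manual task."""
    return person_days_per_month * day_rate


def qualifies(person_days_per_month: float, day_rate: float) -> bool:
    """True if the pain point clears the roadmap threshold."""
    return monthly_cost(person_days_per_month, day_rate) >= THRESHOLD_PER_MONTH


# The article's example: six person-days/month of content review at $2,000/day.
print(monthly_cost(6, 2_000))  # 12000 -> $12,000/month in lost billable work
print(qualifies(6, 2_000))     # True: large enough for the roadmap
print(qualifies(2, 2_000))     # False: deferred, not discarded
```

Deferred items fall below the threshold but stay on the map, which is the point of the filter: it preserves focus without throwing smaller opportunities away.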

Tacit Knowledge and the SaaSpocalypse (From Chris Walker)

  • Triggered by Anthropic’s Claude Legal release coinciding with a $285B loss in SaaS market capitalisation.
  • Despite automation, demand for forward-deployed engineers (embedded in client organizations) surged 800% in 2025.
  • Tacit knowledge: From Michael Polanyi – “we can know more than we can tell.” Expertise often cannot be fully articulated.
  • Tacit knowledge cannot be digitized; AI cannot learn what cannot be expressed in data.
  • Even if AI handles routine tasks, human work increasingly focuses on tacit-knowledge-intensive problems.
  • Predictions: By 2028, either AI will handle complex workflows without humans, or companies with deep embedding practices will thrive.
  • The SaaS market is being reshaped: survival depends on embedding deeply enough to access tacit knowledge, not just producing code.

How IBM Granite Became a Leader in Responsible AI

  • IBM Granite family of language models scored 95% on Stanford’s Foundation Model Transparency Index, 23 points above the next-best model.
  • Transparency and ethics were designed from the start.
    • Indemnified models against copyright claims.
    • Open-sourced weights under Apache 2.0 license.
  • Data pipeline tracked 10 petabytes of training data, full lineage, and provenance; restricted to authoritative US/EU sources.
  • Automation tools:
    • Data Prep Kit – deduplication, filtering, tokenization, scalable infrastructure.
    • DiGiT – reduces fine-tuning from 3 months to 3 weeks, generates targeted training data for specialized tasks.
  • 80 explicit safety policies govern sensitive topics, each accompanied by teaching examples.
  • Human contributors supplement synthetic data for diversity, especially in safety and multi-turn conversational tasks.
  • Independent ISO 42001 certification and external validation underline process discipline as the source of Granite’s transparency.

Reflections

  • AI projects should focus on cumulative learning and value across the organisation.
  • Tacit knowledge remains a critical differentiator; AI complements human expertise rather than replacing it.
  • Responsible AI requires embedding ethics, transparency, and governance into design from day one, not retroactively.
  • IBM’s Granite demonstrates that disciplined process, rigorous data management, and human oversight can deliver measurable leadership in ethical AI.