Author: Dr. Janet Bastiman, Chief Data Scientist, Napier AI

The financial services industry is now witnessing artificial intelligence being used at many touchpoints of the customer journey – from opening a bank account, to informing spending habits and credit behaviour on neobank apps, to financial crime compliance.

The challenges of ‘black box’ AI models

Using AI for AI’s sake is not a luxury this sector can afford. According to S&P Global Market Intelligence, in 2024 42% of firms across different industries reported that they had scrapped most of their AI initiatives that year, up from 17% previously.

AI systems, especially those deployed to detect money laundering, carry both power and risk. Ineffective or poorly implemented systems can cause operational disruptions and errors in decision-making, ultimately producing unfavourable outcomes for end customers.

While modern AI models can help to pick up subtle, evolving criminal tactics that traditional rules-based systems may miss, without transparency and accountability, those same systems can become impossible to audit. False positives may waste resources or wrongly implicate clients; false negatives may let wrongdoing slip through. Either scenario risks regulatory non-compliance, legal consequences, financial loss, and reputational damage.

This poses an issue for financial institutions using AI systems for compliance and monitoring purposes, as they must ensure accountability and transparency in the outcomes generated.

Implementing explainable AI for compliance

Regulators are increasingly insisting on transparency. For example, under the EU AI Act, organisations must develop AI systems in a way that allows appropriate explainability. Explainable AI for screening and monitoring in anti-money laundering means knowing not only that an alert was generated, but why. If analysts can’t understand why alerts are flagged, or if false positives overwhelm them, the system becomes more of a burden than an aid.
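To make this concrete, here is a minimal, purely illustrative sketch in Python of what alert-level explainability can look like: a toy linear risk score that returns not just a flag but the per-feature contributions behind it. The feature names, weights, and threshold are invented for this example; production screening models are far richer, but the principle of attaching reason codes to every alert is the same.

```python
# Illustrative only: a toy linear risk score whose alerts carry a
# per-feature breakdown, so an analyst can see why a transaction was
# flagged. Feature names, weights, and the threshold are hypothetical.

FEATURE_WEIGHTS = {
    "amount_vs_customer_average": 0.9,   # how unusual the amount is for this customer
    "destination_country_risk": 1.4,     # risk rating of the receiving jurisdiction
    "transactions_last_24h": 0.6,        # recent burst of activity
    "new_beneficiary": 0.8,              # first payment to this counterparty
}
ALERT_THRESHOLD = 2.0

def score_transaction(features: dict) -> dict:
    """Return the overall score plus per-feature contributions."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in features.items()
        if name in FEATURE_WEIGHTS
    }
    total = sum(contributions.values())
    return {
        "score": round(total, 2),
        "alert": total >= ALERT_THRESHOLD,
        # Sorted contributions double as the "reason codes" shown to analysts.
        "explanation": sorted(contributions.items(), key=lambda kv: -kv[1]),
    }

if __name__ == "__main__":
    print(score_transaction({
        "amount_vs_customer_average": 2.5,
        "destination_country_risk": 0.7,
        "transactions_last_24h": 0.0,
        "new_beneficiary": 1.0,
    }))
```

An analyst reviewing this alert can see at a glance that the unusual amount and the new beneficiary drove the score, rather than having to take the flag on faith.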

This is where approaches such as human-in-the-loop are critical: incorporating human judgment into the setup, tuning, and testing of algorithms at different stages of development and implementation.
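One simple pattern for this, sketched below with assumed thresholds and field names, is to route every consequential case to an analyst queue with the model’s explanation attached, and to record the analyst’s verdict so it can feed back into tuning and audit. This is an illustration of the idea, not a prescribed design.

```python
# Illustrative only: a basic human-in-the-loop routing pattern.
# The model never acts on a customer directly; anything above a
# hypothetical low-risk band waits for an analyst decision, and that
# decision is stored for later tuning and audit.

from dataclasses import dataclass
from typing import List

@dataclass
class ReviewItem:
    case_id: str
    model_score: float
    explanation: List[str]
    analyst_verdict: str = "pending"   # later set to "confirmed" or "dismissed"

REVIEW_QUEUE: List[ReviewItem] = []

def route_case(case_id: str, model_score: float, explanation: List[str]) -> str:
    """Decide whether a case is auto-closed or sent to a human."""
    if model_score < 0.3:   # hypothetical low-risk band: auto-close, but keep a record
        return "auto-closed"
    # Everything else waits for a human decision before any customer impact.
    REVIEW_QUEUE.append(ReviewItem(case_id, model_score, explanation))
    return "queued for analyst review"

print(route_case("case-001", 0.82, ["unusual amount", "new beneficiary"]))
```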

By making sure that decisions made by AI systems are both traceable and justifiable, we can also ensure AI models don’t unfairly target certain groups because of unrepresentative datasets they were trained on. How do we do this?

    1. Define clarity from the start: What is the problem to be solved? What level of explainability is acceptable given the risk? Who needs to understand model behaviour (auditors, external regulators, internal compliance)?

    2. Rigorous testing, validation, and monitoring: Not just before deployment, but continuously. Monitor false negative and false positive rates, check for bias, and ensure that the model’s decisions remain interpretable over time (a minimal sketch of this kind of monitoring follows this list).

    3. Human-in-the-loop and governance structures: Even with high automation, decisions of consequence require human oversight. Accountability should be clearly defined: who signs off on model decisions, who handles escalations, and who handles errors.
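As a concrete illustration of the second point above, the sketch below (plain Python, with hypothetical thresholds, window size, and customer segments) shows one way continuous monitoring might work once analyst outcomes flow back into the system: false positive rates are tracked per segment, drift triggers a re-tuning flag, and a large gap between segments is surfaced as a possible bias signal. It is a sketch of the idea, not a production monitoring framework.

```python
# Illustrative only: ongoing monitoring, assuming each closed alert is
# labelled by an analyst as a confirmed hit or a false positive and
# carries a customer segment. Thresholds, window size, and segment
# names are hypothetical.

from collections import defaultdict, deque

WINDOW = 1000                 # most recent labelled alerts kept per segment
FP_RATE_CEILING = 0.97        # hypothetical tolerance before re-tuning is flagged
SEGMENT_GAP_CEILING = 0.10    # max allowed gap in FP rate between segments

outcomes = defaultdict(lambda: deque(maxlen=WINDOW))  # segment -> recent labels

def record_outcome(segment: str, was_false_positive: bool) -> None:
    """Called when an analyst closes an alert."""
    outcomes[segment].append(was_false_positive)

def fp_rate(segment: str) -> float:
    labels = outcomes[segment]
    return sum(labels) / len(labels) if labels else 0.0

def health_report() -> dict:
    """Surface both overall drift and uneven impact across segments."""
    rates = {seg: fp_rate(seg) for seg in outcomes}
    worst, best = (max(rates.values()), min(rates.values())) if rates else (0.0, 0.0)
    return {
        "fp_rates_by_segment": rates,
        "retuning_needed": any(r > FP_RATE_CEILING for r in rates.values()),
        "possible_bias": (worst - best) > SEGMENT_GAP_CEILING,
    }

record_outcome("retail", True)
record_outcome("retail", False)
record_outcome("corporate", True)
print(health_report())
```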

Ultimately, the promise of AI in financial crime compliance lies not just in speed or scale, but in trust. Institutions that embed transparency, accountability, and ethical safeguards into their AI frameworks will be better positioned to stay ahead of criminals, satisfy regulators, and protect customers.

The lessons we learn in fincrime compliance apply far more broadly. In every industry adopting AI, from financial services, healthcare and insurance to retail and energy, the same themes of transparency, accountability, and fairness are vital. Without them, AI runs the risk of creating unintended harms, introducing bias, or eroding the trust of customers and regulators alike. Ensuring fairness and explainability will determine whether AI becomes a technology that genuinely serves society, helping industries innovate safely, sustainably, and with lasting public trust.


Join Janet at the Data and AI Conference Europe 2025

Dr. Janet Bastiman’s work shows that responsible AI isn’t just about compliance; it’s about building systems that people can trust. With deep experience at the intersection of data science, ethics, and regulation, Janet champions explainable AI that’s transparent, accountable, and effective in real-world financial environments.

Don’t miss her keynote, “Responsible AI in Financial Services – Lessons From the Front Line,” on Wednesday, 15 October 2025, 2:10–2:50 PM, where she’ll share practical insights on how to implement AI that drives innovation without sacrificing trust.

You can also catch Janet earlier that day on the panel “Ethical AI in Finance: Balancing Innovation, Trust, and Regulation,” from 11:20 AM–12:00 PM, alongside industry leaders from NatWest Bank and Castlebridge, exploring how to navigate the fine line between progress and responsibility in financial AI.

Secure Your Spot: Here