Look before you leap: Understanding the limitations of AI in the regulatory space


Last week I went to the Commodity Futures Trading Commission (CFTC) in Washington DC to attend the Fintech Forward 2019 conference. I was grateful for the opportunity to speak on a panel, “AI in the 21st Century Marketplace: Exploring the Role of AI and Related Fintech in CFTC Markets”.

Artificial Intelligence (AI) is an exciting tool that has shot to prominence in the last few years, with applications ranging from Snapchat filters and Spotify recommendations to sophisticated systems that catch fraudulent transactions or spot the signs of lung cancer before expert radiologists do. But how will AI impact the regulatory compliance space?

AI and data quality: don’t get too excited

At Kaizen we’re primarily concerned with helping firms check the data quality of the transaction reports that they have to submit under a variety of regimes from around the world, such as MiFIR, EMIR and the Dodd-Frank Act. We have to be careful not to be too optimistic about the role AI can play in this. AI is most useful when we don’t know exactly what logic a computer programme should be following, so we need the AI to ‘learn’ it from the data. But with regulatory reporting, the rules are clearly laid out – meaning that rule-based “expert systems” are the right technology for testing reports.
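To make the contrast concrete, here is a minimal sketch of what a rule-based check might look like. The field names and rules are purely illustrative – they aren’t drawn from any particular regime or from Kaizen’s actual tests:

```python
from datetime import datetime, timezone

def validate_report(report):
    """Run a few illustrative, hand-written rules against one transaction report."""
    errors = []

    # Rule 1: price must be present and strictly positive.
    if report.get("price") is None or report["price"] <= 0:
        errors.append("price must be a positive number")

    # Rule 2: execution timestamp must be present and not in the future.
    ts = report.get("execution_timestamp")
    if ts is None or ts > datetime.now(timezone.utc):
        errors.append("execution_timestamp missing or in the future")

    # Rule 3: buyer and seller identifiers must differ.
    if report.get("buyer_id") == report.get("seller_id"):
        errors.append("buyer_id and seller_id must not be identical")

    return errors


# Example usage with a deliberately faulty record.
sample = {
    "price": -10.5,
    "execution_timestamp": datetime(2019, 7, 1, tzinfo=timezone.utc),
    "buyer_id": "FIRM-A",
    "seller_id": "FIRM-A",
}
print(validate_report(sample))
# ['price must be a positive number', 'buyer_id and seller_id must not be identical']
```

Every rule here is explicit and auditable – there is nothing for a machine to ‘learn’, which is exactly why an expert-system approach fits this problem better than AI.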

Yet, despite the seemingly straightforward nature of accurate reporting, at Kaizen we’ve found that firms can commonly make errors in as many as 90% of records. Firms should be wary of “running before they can walk” by increasing the complexity of their reporting pipelines with AI while they are still making basic mistakes setting up their reporting systems.

But AI can help with market integrity

Where AI can really come into its own in regulatory compliance is market integrity – particularly safeguards against market abuse such as insider trading. Under the Market Abuse Regulation, firms are required to monitor, detect and report suspicious transactions that take place on their platforms. Unlike with regulatory reporting, there are no hard-and-fast rules for what abusive trades look like. This is where the power of AI and machine learning (ML) can be brought to bear, learning how to find the anomalous needles in the gargantuan haystack of reported transactions.
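As a rough illustration of the kind of approach involved, here is a minimal sketch using an off-the-shelf anomaly detector (scikit-learn’s IsolationForest) on made-up transaction features. A production surveillance system would of course use far richer features and far more sophisticated models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-ins for transaction features, e.g. notional size,
# deviation from the prevailing market price, and hours to a news event.
rng = np.random.default_rng(42)
normal_trades = rng.normal(loc=[1.0, 0.0, 5.0], scale=[0.2, 0.1, 1.0], size=(1000, 3))
odd_trades = rng.normal(loc=[5.0, 2.0, 0.1], scale=[0.5, 0.3, 0.05], size=(5, 3))
trades = np.vstack([normal_trades, odd_trades])

# Train an unsupervised anomaly detector: no labelled examples of abuse
# are needed; the model learns what 'normal' activity looks like.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(trades)

# A prediction of -1 marks a trade as anomalous, i.e. flagged for human review.
flags = detector.predict(trades)
print(f"{(flags == -1).sum()} of {len(trades)} trades flagged for review")
```

The point is not the particular algorithm but the division of labour: the model surfaces unusual activity, and human analysts decide whether a flagged trade is genuinely suspicious.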

AI goes rogue

At Fintech Forward I also commented on the challenges that AI might pose for regulators and market participants. One big challenge regulators may face is the question of accountability: who is to blame if an AI ‘goes rogue’? The notion of a superintelligent AI that understands its actions still belongs in science fiction; contemporary AIs are really just mathematical models that extrapolate patterns, so the idea of holding an AI itself responsible is silly. It isn’t only the programmers who determine how an AI will behave; the AI is also heavily shaped by choices such as which data fields it can ‘see’, what training data is in scope, and what metric for success it will optimise. Fundamentally, accountability for an AI need not be different from accountability for any other automated computer system.

The challenges of algorithmic bias

Another challenge for regulators is the issue of algorithmic bias – where an AI behaves unfairly in a systematic way due to unintended problems with the training data or the question it was asked to answer. One famous example is a recruitment AI that Amazon scrapped because it was unintentionally disfavouring women. Regulators that use AI for enforcement have to be careful to minimise algorithmic bias, and it isn’t only individuals who might get caught up. As a hypothetical example, if regulators only trained their AI on data from Tier 1 investment banks, they might end up unfairly over-flagging transactions from hedge funds.
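One simple guard against this kind of bias is to measure how often a model flags transactions from different populations of firms. The sketch below is purely hypothetical: it assumes a fitted surveillance model following the scikit-learn convention, a pandas DataFrame of transactions, and a ‘firm_type’ column, none of which come from any real system:

```python
import pandas as pd

def flag_rates_by_group(transactions: pd.DataFrame, model, feature_cols, group_col="firm_type"):
    """Compare how often a surveillance model flags each group of firms.

    A large gap between groups (e.g. 'tier1_bank' vs 'hedge_fund') is a prompt
    to re-examine the training data, not evidence that one group trades abusively.
    """
    df = transactions.copy()
    # Assume the model returns -1 for anomalous / flagged observations,
    # as scikit-learn anomaly detectors do.
    df["flagged"] = model.predict(df[feature_cols]) == -1
    return df.groupby(group_col)["flagged"].mean().sort_values(ascending=False)
```

A check like this won’t remove bias on its own, but it makes a skewed flagging rate visible early, before enforcement decisions are built on top of it.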

Black box models

One problem for market participants who deploy AIs is the lack of interpretability of “black box” models. State-of-the-art AIs can have millions of parameters, making it effectively impossible to know why they make the decisions they do, which in turn makes the risks hard to quantify. Moreover, the inherently adversarial nature of market participation means that AI complexity is more of a risk than it would be in collaborative contexts such as self-driving cars – a counterparty’s AI might learn to trick a black box AI into mispricing securities, for example. For these reasons, few market participants deploy AIs without a human in the loop.
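There are tools that can at least partially open the box. One common technique is permutation importance, which measures how much a model’s performance degrades when each input is shuffled. The sketch below assumes a generic fitted scikit-learn model and a labelled hold-out set, both hypothetical:

```python
from sklearn.inspection import permutation_importance

def explain_black_box(model, X_test, y_test, feature_names):
    """Rank features by how much shuffling each one hurts the model's score.

    This does not reveal the model's internal logic, but it gives a first-order
    view of which inputs its decisions actually depend on.
    """
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, importance in ranked:
        print(f"{name:<25} {importance:+.4f}")
```

Diagnostics like this help with risk quantification, but they are a complement to – not a substitute for – keeping a human in the loop.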

In the future, we’ll see AI play an ever-increasing role in our lives, including in finance and regulation. But as long as the technology is still in its infancy, it’s important to temper the hype by understanding the limitations. Deploying AI systems has the potential to add huge value, but as always, firms should look before they leap.

Watch Sam in action at Fintech Forward 2019.