Artificial intelligence (AI) is playing a key role in the digital transformation of financial services. But while there are many benefits, there is also a risk of significant harm, according to a new report from the AI and data science research organisation The Alan Turing Institute.

The adoption of AI in financial services is underpinned by three main areas of innovation: machine learning (ML), non-traditional data, and automation. This means there are distinct challenges in play, the report says:

  • The performance of AI systems depends on the quality of the data used, but data quality problems can be difficult to identify and address.
  • Machine learning models can have characteristics that set them apart from more conventional ones, including opaqueness and non-intuitiveness.
  • The adoption of AI can be accompanied by significant changes in the structure of technology supply chains, increasing complexity.
  • It can also have impacts at a much larger scale than conventional ways of doing business.

On top of these, there may be ethical concerns about some AI systems’ performance, legality, regulatory compliance, competent use, and oversight. 

The explicability of decisions made by ‘black box’ technologies, firms’ ability to respond to customer requests for error correction, and any social or economic impacts associated with these issues are all problem areas.

In financial services, one of the key applications of these technologies is in anti-money-laundering (AML) and the prevention of financial crime. For example, AI and machine learning could help financial institutions to identify crimes much earlier, or spot unusual patterns of behaviour that might suggest criminal activity.

However, AI-powered mechanisms could also lead to unwarranted denials of service to consumers, warns the report.

“In the context of know-your-customer (KYC) procedures during the onboarding of new customers, for example, customers may be turned away due to mistaken identity or due to models with excessive false positive rates.

“In the context of transaction monitoring, false positives can lead to customers mistakenly being denied the execution of transactions or withdrawal of funds.”
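The report does not put numbers on these failure modes, but simple base-rate arithmetic shows why false positives dominate transaction monitoring: genuinely suspicious transactions are rare, so even an accurate model produces mostly false alarms. The sketch below uses purely illustrative, assumed figures rather than anything from the report.

```python
# Illustrative only: the figures below are assumptions, not drawn from the Turing report.
# Shows why even a seemingly low false-positive rate swamps investigators (and customers)
# when genuinely suspicious transactions are rare among those screened.

def alert_precision(n_transactions: int,
                    prevalence: float,
                    true_positive_rate: float,
                    false_positive_rate: float) -> float:
    """Fraction of alerts that point at genuinely suspicious transactions."""
    suspicious = n_transactions * prevalence
    legitimate = n_transactions - suspicious
    true_alerts = suspicious * true_positive_rate
    false_alerts = legitimate * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Assumed: 1 million transactions, 0.1% genuinely suspicious, and a model that
# catches 90% of them while wrongly flagging 2% of legitimate transactions.
print(f"{alert_precision(1_000_000, 0.001, 0.90, 0.02):.1%}")  # ~4.3% of alerts are genuine
```

On those assumptions, more than 95 per cent of blocked transactions belong to innocent customers, which is exactly the harm the report warns about.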

Real-world stories

Transform Finance knows of a customer who recently attempted to buy a low-cost music accessory – a guitar strap – from one of the UK’s largest ecommerce sites for music equipment. The product included the words ‘Persian Gold’ in its name, denoting a decorative pattern on the item.

The transaction was automatically blocked by his payment provider on suspicion of money laundering, terrorist financing, or another financial crime, leading to an investigation of the customer and temporary suspension of his services. 

Despite repeated written requests from the customer, whose services were eventually restored, the payments platform refused to give any explanation for the bizarre decision. It seems likely, however, that the block was triggered by the product name: no other purchase from the same portal, or from any other outlet, has ever been blocked by his provider.
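The provider has never confirmed what went wrong, but a naive keyword screen against geography-linked terms would behave in exactly this way. The sketch below is speculative; the watch list and function names are hypothetical, not the provider's actual system.

```python
# Speculative illustration of naive keyword screening; the watch list and matching
# logic here are hypothetical, not the payment provider's actual system.
WATCH_TERMS = {"persian", "iran", "tehran"}  # assumed geography-linked terms

def is_flagged(description: str) -> bool:
    """Flag any transaction whose description contains a watched term."""
    words = description.lower().split()
    return any(term in words for term in WATCH_TERMS)

print(is_flagged("Guitar strap - Persian Gold pattern"))  # True: blocked
print(is_flagged("Guitar strap - plain black leather"))   # False: allowed
```

A screen this crude cannot tell a decorative pattern from a sanctioned jurisdiction, which would explain how a guitar strap came to be treated as a financial-crime risk.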

Problems such as this are absurd, and they are unlikely to be isolated cases.

Transform Finance knows of another customer who was mistakenly identified as an absconding council tax debtor, simply by having a similar name to one and having recently moved to a new home. 

The customer had moved to an address in the same town as his previous one, where he had lived for a decade and could prove his own payments were up to date. Meanwhile, the actual debtor had lived 100 miles away and had absconded months earlier, owing a substantial sum.

A crime prevention and debt collection algorithm had confused two different people who happened to share a very common name, based on almost no other data.
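The details of that algorithm are unknown, but a matcher that keys on little more than name similarity and a recent change of address would produce exactly this kind of false hit. The following is a hypothetical sketch, not the actual system.

```python
# Hypothetical sketch of over-aggressive identity matching on sparse data;
# the real debt-collection algorithm's workings are not public.
from difflib import SequenceMatcher

def looks_like_absconder(name: str, recently_moved: bool,
                         debtor_name: str, threshold: float = 0.85) -> bool:
    """Match on name similarity plus a recent move - almost no other evidence."""
    similarity = SequenceMatcher(None, name.lower(), debtor_name.lower()).ratio()
    return similarity >= threshold and recently_moved

# A common name and a house move are enough to trigger a false match.
print(looks_like_absconder("John Smith", recently_moved=True, debtor_name="Jon Smith"))  # True
```

With a common name, thousands of people would clear that similarity threshold, and a house move is hardly distinguishing evidence.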

But for the innocent customer, the consequences were disastrous: he was repeatedly refused permission to open a bank account for the new limited company he had just set up – a facility that was essential for the venture to trade. 

Every high street bank in the UK refused to offer him even basic banking facilities, even though his own finances were in excellent order and he had no record of previous failed businesses or criminal activity.

In the end, he had no alternative but to close his start-up and begin working as a sole trader, purely so his clients could pay him for his work via his existing personal bank account. 

The original error from a badly designed, brute-force algorithm fanned out across a network of interconnected institutions and systems, compounding an innocent customer’s problems, and worsening his credit rating with each new refusal. 

Over the next few weeks, he found he was unable to enter into any form of credit agreement as his problems snowballed.

He told Transform Finance, “I was treated like a criminal. The manager of one bank even told me that, had it been up to him, he would have opened the account for me, as he could see, from hard evidence that I provided – proof of my old address, bank statements, orders, and so on – that I was telling the truth. But the bank’s computer system wouldn’t let him.

“One irony is that, had I actually been a criminal who had just been released from prison, I would have had more options. Another is that I was not asking to borrow money on the new account, just for the most basic banking facilities, so denying me those based on a credit rating is ludicrous.

“My experience is that today’s banking system is entirely run by automated decisions, and even at the most senior level human beings are unable to intervene. There is no right of appeal, no mechanism to correct obvious errors, no override. 

“If the computer says no, it seems that all a bank manager can do is shrug and apologise. There must be a better way of handling these issues than locking innocent consumers out of the UK banking system and making managers look like powerless junior employees.”

Market lockout

The Turing Institute report says, “The increased reliance on shared data and systems across firms, potentially facilitated by firms’ reliance on the same third-party providers, could mean that customers affected by an erroneous KYC assessment are not just turned away by an individual firm, but find themselves being locked out across the market.”

Transform Finance’s sources confirm that these fears are a reality.

When it comes to battling financial crime overall, AI could have both positive and negative consequences, says the Institute. On the one hand, AI could enable dramatic improvements in the systems that are used to prevent financial crime, via the use of machine learning, non-traditional data, and/or automation, and by speeding up the detection of anomalies.

But on the other, the vulnerabilities of AI systems to errors in the source data, to adversarial attacks and data poisoning, and to badly conceived algorithms are all significant weaknesses. Automating a bad decision doesn’t make it a good decision.
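The kind of anomaly detection the report alludes to can be sketched in a few lines. The choice of scikit-learn's IsolationForest below is ours for illustration, not something the report prescribes, and the transaction data is invented; note that even this toy model flags some legitimate transactions alongside the genuinely unusual one.

```python
# Minimal anomaly-detection sketch; IsolationForest and the invented data are
# our illustrative choices, not prescribed by the Turing Institute report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per transaction: [amount in GBP, hour of day].
normal = np.column_stack([rng.normal(40, 15, 1000), rng.normal(14, 3, 1000)])
unusual = np.array([[9500.0, 3.0]])  # a large transfer at 3am

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print(model.predict(unusual))               # expected [-1]: flagged as anomalous
print((model.predict(normal) == -1).sum())  # ~10 ordinary transactions also flagged
```

The upside and the downside sit in the same two lines of output: the suspicious transfer is caught quickly, but a handful of perfectly ordinary customers are flagged with it.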

AI could also contribute to occurrences of market abuse, adds the report. “For instance, AI trading systems could draw on information that is material non-public information, resulting in decisions that amount to insider trading. 

“Similarly, AI systems could pursue trading strategies that amount to prohibited forms of market manipulation. 

“Market abuse on the part of AI-enabled trading systems could also occur unintentionally, due to an insufficient understanding of the kinds of information that a system draws on or the strategies that it may develop – caused, for example, by model complexity, model adaptivity, or the use of non-traditional data.”

AI and machine learning could also be deployed, deliberately or unknowingly, to illegally discriminate against individuals or minority groups, warns the report. 

“In addition, there are forms of differential treatment that can be considered unfair without necessarily violating legal non-discrimination requirements. For example, as the FCA’s work has highlighted, differential treatment can be problematic if it is at odds with the interests and needs of vulnerable consumers.