Addressing the Trust Problem with AI in the Banking Industry


Artificial intelligence suffers from a trust problem, especially in the banking industry. When it comes to financial decisions such as eligibility for a bank loan, only 25% of consumers trust the opinion of an AI over that of a human. Meanwhile, a UK survey found that only 19% of consumers would let AI take charge of their finances.

This trust problem extends to banks themselves. Right now, banks are enjoying success with AI in non-consumer-facing applications such as fraud detection and risk assessment: 80% of fraud specialists who use AI believe the technology has reduced payment fraud. Even so, trust remains an issue, with nearly 43% of fraud specialists citing a lack of transparency as one of their concerns about AI.

Lastly, bankers themselves have trust issues when it comes to letting AI invest. No one can easily forget the "flash crash" of 2010, in which the US stock market lost around a trillion dollars in just over half an hour before rapidly regaining most of its losses. Driven partly by high-frequency algorithmic trading, the flash crash has made banks leery of putting unsupervised AI products in charge of any important systems.

With all this said, AI has huge potential to help banks make money, help their customers save money, and mitigate fraud and other risks. How can AI vendors create products that help banks and banking customers overcome this trust gap?

Begone with the Black Box


Right now, the largest barrier between banks and artificial intelligence is that many AI products don't explain the decisions they make. Imagine a fraud expert using an AI tool: the product doesn't tell the user why it declines one transaction and approves another, only that the transaction was declined.

This opens up a huge can of potential liabilities for the bank because there's no understanding of which factors led to that decision. What if the flagged transaction is actually a false positive? What if the AI believed fraud was taking place because of bad training data? What if the decline was due to racial, ethnic, or religious bias in the training data? There's no way to know without months of audits.

AI vendors don't intend to create products that reflect their own biases, but it often happens regardless. For example, if you train an AI on a dataset made up mostly of white people, people of color may find themselves under-represented in its decision-making process. If this happened at a bank, it wouldn't just be morally wrong; it would be illegal under several laws, including the Equal Credit Opportunity Act of 1974 and the Truth in Lending Act of 1968. In other words, it is essential for banks to understand why their software products make the decisions they do.

The answer is to create AI products that explain their decisions, exposing the correlations they find between a customer's behavior and their fraud risk or creditworthiness. This helps ensure that these decisions are ethically and fiscally sound.

Creating AI that Explains Itself


For bankers to trust AI products, these products need to produce highly interpretable decisions. In other words, the decisions need to be understandable by users who aren't necessarily experts in machine learning, and the products need to provide detailed evidence to back up their explanations.

In a fraud detection context, for example, the AI might say, "I believe that this transaction is fraudulent because it was made from a spoofed IP address; here is the IP address, and here's how I know it was spoofed." A fraud detection expert might not know much about AI, but they do know why a spoofed IP address can be a sign of fraud. This gives them confidence that the software is making a correct decision that isn't informed by bias.

In actual operation, AI is much more complicated than this, which means that building explainability is more complicated as well. Instead of natural language, your product will most likely generate "reason codes" that function like error codes on a webpage. Rather than identifying the exact reason a transaction was declined, they tell you which parts of the algorithmic model contributed most to the decision. Your tool might even contain a module that determines whether algorithmic bias is present.
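To make the idea concrete, here is a minimal sketch in Python of how a vendor might map per-feature contributions from a simple linear fraud model to reason codes. The feature names, weights, and code table are hypothetical, and a production system would derive attributions from its actual model rather than hand-coded coefficients.

```python
# Minimal sketch: turning per-feature contributions of a simple linear
# fraud model into "reason codes". All names, weights, and codes below
# are hypothetical and purely for illustration.

import math

# Hypothetical logistic-regression weights, assumed to be learned elsewhere.
WEIGHTS = {
    "ip_address_spoofed": 2.4,   # strong fraud signal
    "amount_vs_avg_ratio": 0.9,  # transaction size vs. the customer's average
    "new_merchant": 0.5,
    "foreign_country": 0.7,
}
BIAS = -3.0

# Hypothetical reason-code table, similar to the codes lenders already
# return with declined applications.
REASON_CODES = {
    "ip_address_spoofed": "R01: connection appears to use a spoofed IP address",
    "amount_vs_avg_ratio": "R02: amount is unusually large for this customer",
    "new_merchant": "R03: first transaction with this merchant",
    "foreign_country": "R04: transaction originates outside the home country",
}

def score_with_reasons(features, top_n=2):
    """Return the fraud probability plus reason codes for the features
    that pushed the score up the most."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))

    # Rank features by how much they increased the fraud score.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    reasons = [REASON_CODES[name] for name, value in ranked[:top_n] if value > 0]
    return probability, reasons

if __name__ == "__main__":
    transaction = {
        "ip_address_spoofed": 1.0,
        "amount_vs_avg_ratio": 3.2,
        "new_merchant": 1.0,
        "foreign_country": 0.0,
    }
    prob, reasons = score_with_reasons(transaction)
    print(f"fraud probability: {prob:.2f}")
    for reason in reasons:
        print(reason)
```

The same pattern generalizes to more complex models, where an attribution method would take the place of the raw coefficients, but the output the fraud specialist sees stays the same: a score plus a short list of human-readable reasons.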


AI vendors have a strong incentive to make sure that users trust their products. Not only are there trust problems with users, but regulators are also turning their attention to AI. Rather than find themselves regulated into a corner, expect AI companies to invest heavily in interpretable technologies over the coming months and years.

To learn more about how Bitvore can help you, download our case study below!

