
Banking customers will embrace AI if they trust what's behind the curtain

Businesses in all sectors are embracing artificial intelligence (AI) because it allows them to quickly derive deeper insights from rich customer transaction data. And consumer sentiment about businesses' use of AI is changing quickly. In fact, Genpact research found that the share of consumers who are comfortable with companies using AI to access their personal data - if it improves their customer experience - jumped from 30 percent in 2017 to 54 percent in 2019.

That's both good and bad news for banks. On the one hand, it means their customers are far more open to technology that helps improve their financial situation by identifying, for example, areas where they might be able to save money every month.

On the other hand, customers may feel their privacy has been invaded if they receive pop-up offers for financing options while they're in the process of buying a home.

So, what can banks do to gain more trust from their customers about AI? The key is developing an ethical framework based on transparency and education.

Trust starts with transparency

Receiving pushed product offers may alarm customers and erode trust. This is important for banks to keep in mind, particularly as they become increasingly active participants in the “experience economy.” Instead of simply selling a commodity-based mortgage, banks offer financial betterment: helping customers understand what they can afford, which mortgage products would be most beneficial, and the implications for their cash flow and other bills or debts.

In this type of AI-enabled experience - one that factors in the customer's entire financial context - banks must be transparent about the data they have, the insight it gives them, and how they use it to help their customers ethically. Put differently, banks need to explain to their customers how and why they're prescribing solutions.

Another aspect of transparency and trust in a bank's AI environment is traceability: the ability to go back to the exact point in time when a decision was made and determine why it was made. For example, when a customer applies for a loan, the bank should be able to provide more than a “yes” or a “no” and the credit score that supports the answer. Without giving away its loan-scoring “secret sauce,” the bank should be able to explain the factors its AI capabilities used to make the decision, such as business tenure and gross assets. And if it declined the loan, it should explain what steps the customer can take to improve their chances of approval.
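The traceability idea above can be sketched in code. This is a minimal, hypothetical example - the factor names, thresholds, and scoring rule are illustrative, not any bank's real model. The point is that every automated decision is stored with a timestamp, the factors the model saw, and actionable reasons, so it can be explained later without exposing the scoring formula itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LoanDecision:
    """An auditable record of one automated loan decision."""
    approved: bool
    credit_score: int
    factors: dict    # factor name -> value the model saw
    reasons: list    # human-readable explanations for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def decide_loan(application: dict) -> LoanDecision:
    """Toy scoring rule: approve if business tenure and gross assets
    clear illustrative thresholds. Real models are far more complex;
    what matters here is that each decision records *why* it was made."""
    factors = {
        "business_tenure_years": application["business_tenure_years"],
        "gross_assets": application["gross_assets"],
    }
    reasons = []
    if factors["business_tenure_years"] < 2:
        reasons.append("Business tenure under 2 years; reapply after "
                       "operating longer.")
    if factors["gross_assets"] < 50_000:
        reasons.append("Gross assets below 50,000; increasing assets "
                       "improves approval chances.")
    approved = not reasons
    if approved:
        reasons.append("Met tenure and asset thresholds.")
    return LoanDecision(approved=approved,
                        credit_score=application["credit_score"],
                        factors=factors,
                        reasons=reasons)

# A declined applicant gets actionable reasons, not just a "no":
decision = decide_loan({"business_tenure_years": 1,
                        "gross_assets": 80_000,
                        "credit_score": 690})
print(decision.approved)    # False
print(decision.reasons[0])  # explains the tenure shortfall
```

Because the record keeps the inputs and reasons alongside the outcome, an auditor - or the customer - can later ask "why was this declined on that date?" and get a concrete answer.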

Security and education are essential

Banks invest enormous amounts in cybersecurity and penetration testing to defend themselves from cyberattacks. But that spending meant little to the 143 million customers affected by the 2017 Equifax hack, or to India's Cosmos Bank, which lost 944 million rupees to hackers in 2018.

Forward-thinking banks clearly and concisely educate their customers on the security measures they take, and publish what they spend on security.

Bias is preventable

Not surprisingly, consumers continue to be concerned about bias. Indeed, Genpact research found that 78 percent of consumers say it's important that companies take active measures to prevent it, and 67 percent are apprehensive about potential discrimination when robots make decisions.

And the reality is that an AI algorithm is only as good as the data it uses, and teams' unconscious biases can creep in when there's little diversity among their members.

But banks can mitigate this issue by building oversight into the solution from the start. They can develop rules that prevent the machine from repeating the mistakes a human would make, such as issuing a loan inappropriately or denying mortgages to people in a certain demographic segment.
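One way to sketch that kind of oversight, with hypothetical rule names, attributes, and thresholds (not a real compliance framework): a guardrail layer that checks every model decision against a set of rules before it reaches the customer.

```python
# Illustrative guardrail layer: rules a bank might run on every model
# decision before it takes effect. Attribute names and policies are
# hypothetical assumptions for the sketch.

PROTECTED_ATTRIBUTES = {"race", "religion", "gender", "age", "postcode"}

def check_decision(decision: dict) -> list:
    """Return a list of rule violations (an empty list means the
    decision passes review)."""
    violations = []

    # Rule 1: the model must not have used protected attributes.
    used = set(decision["factors_used"]) & PROTECTED_ATTRIBUTES
    if used:
        violations.append(
            f"Decision used protected attributes: {sorted(used)}")

    # Rule 2: a denial must cite at least one actionable reason.
    if not decision["approved"] and not decision["denial_reasons"]:
        violations.append("Denial has no stated reasons.")

    # Rule 3: don't issue loans the applicant plainly cannot service.
    if (decision["approved"]
            and decision["monthly_payment"]
                > 0.5 * decision["monthly_income"]):
        violations.append(
            "Approved payment exceeds 50% of monthly income.")

    return violations

# A problematic decision is flagged before it reaches the customer:
flags = check_decision({
    "approved": False,
    "factors_used": ["postcode", "credit_score"],
    "denial_reasons": [],
    "monthly_payment": 0,
    "monthly_income": 4_000,
})
for flag in flags:
    print(flag)
```

The design choice is that the rules sit outside the model: they don't need to understand how the score was computed, only to veto outcomes that violate policy, which is what makes them usable as human-defined oversight.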

And this is good news for banks and customers alike. In the past, for example, a loan officer might overlook weak creditworthiness if he or she knew the customer. An effectively trained AI model, by contrast, applies the same criteria consistently to every applicant.

There's little denying AI's benefits to both banks and their customers. To make sure the technology can deliver its expected value, banks must develop an ethical framework of transparency and education to gain and retain their customers' trust.

This article was authored by Mark Sullivan, Global Business Leader, Banking & Capital Markets, Genpact and was first published in Money Control.