Education and transparency in artificial intelligence in banking


While artificial intelligence (AI) has been decades in development, its penetration into mainstream business and banking culture has hit a tipping point. AI has become the new banking buzzword with a 'there's an AI for that' environment emerging.

Banks are using AI to better understand customers' needs and to bring more relevant, timely offerings and advice to their customers.

For example, many voice-enabled banking bots originally responded only to simple questions, such as “What is my routing number?” or “What is my balance?” Customers can now ask these same bots more sophisticated questions geared toward helping them attain financial betterment, such as, “How much am I spending in a certain category?” or “Where can I save money next month?” There's a richness of customer transaction data, and the insights that can be derived from it, that is improving over time.

Still, many consumers report being hesitant about their data being used for artificial intelligence purposes. For example, recent Genpact research found that 64% of UK consumers have concerns around companies using AI that accesses their personal data, even if it improves their customer experience.

The key to promoting trust and increasing AI adoption is the development of an ethical framework that entails transparency and education.

Transparency equals trust

If a customer is shopping for a car, and her bank suddenly offers her an auto loan, that could represent a timely, value-added service. On the other hand, it could raise questions in the mind of the customer about why the bank knows that she is shopping for a car or how the bank knows so much about her.

Merely pushing products can alarm customers and erode trust. This is important, as bank brands and trust have been challenged in recent years. Rather than proactively using data to push products, banks must be transparent with customers about the data that they have, the insight it provides for them, and how they can use that insight to help customers in an ethical way.

Simultaneously, banks are evolving from manufacturing and marketing products to facilitating experiences. For example, banks are moving from simply selling a commodity-based mortgage to facilitating a home buying experience. The latter entails helping a customer understand what he can afford, the types of products that would be the most beneficial to him, the implications for his cash flow and other bills or debts, etc. Banks aim to help customers not only with individual financial products, but with the entire financial context that surrounds them.

In doing so, banks are starting to use artificial intelligence to understand who their clients really are, what their needs are, and what products might help them to achieve their objectives so that they can make great choices for their own financial betterment. As part of this evolution, banks must also try to take the next step to explain how and why they're prescribing the solutions that they're prescribing to customers.

Traceability and trackability are also part of transparency. Traceable and trackable AI allows a bank to go back to the exact point where a decision was made and determine why it was made. When a customer applies for a loan, the bank should be able to provide more than a “yes” or a “no” and a score. Without giving away the loan-scoring model that is the bank's “secret sauce,” the bank should be able to identify the broad parameters, such as business tenure and gross assets, that were used in the determination and, if the customer is declined, educate them about what it would take to be approved.
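The idea of a traceable decision can be sketched in code. This is a minimal, purely illustrative example (the thresholds, factor names, and scoring logic are assumptions, not any bank's actual model): the decision function records which broad parameters drove the outcome, so the bank can explain a decline without exposing the model itself.

```python
# Illustrative sketch only: a loan decision that carries customer-facing
# reason codes alongside the score, so the outcome is traceable without
# revealing the proprietary scoring model. All thresholds are invented.

from dataclasses import dataclass, field

@dataclass
class LoanDecision:
    approved: bool
    score: int
    reasons: list = field(default_factory=list)  # broad, explainable factors

def assess_loan(business_tenure_years: float, gross_assets: float,
                requested_amount: float) -> LoanDecision:
    """Score an application and record which broad parameters drove the result."""
    score = 0
    reasons = []

    if business_tenure_years >= 2:
        score += 40
    else:
        reasons.append("Business tenure under 2 years lowered the score")

    if gross_assets >= 2 * requested_amount:
        score += 40
    else:
        reasons.append("Gross assets below twice the requested amount lowered the score")

    approved = score >= 60
    if approved:
        reasons.append("Approved: tenure and asset thresholds were met")
    return LoanDecision(approved, score, reasons)
```

A declined customer can then be shown the recorded reasons, which is exactly the education step described above.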

Traceability and trackability promote trust and increase adoption of AI.

Security is key

With AI, customer data is used to make critical decisions, so it is important to ensure that the information is secure. Data security is paramount for banks, which spend heavily on cyber security; indeed, the one thing many banking CEOs fear most is a data breach.

One need only look at the fallout from the 2017 hack of Equifax, one of the world's largest credit reporting agencies, which affected some 143 million consumers, to understand the magnitude of the implications of security in the marketplace.

Banks invest enormous amounts of money in defending against cyber attacks, in cyber security, and in penetration testing. As part of educating their customers, banks often explain how secure they are and publish what they spend on security.

A word about bias

Much has been said about AI bias, and data and team bias do exist. After all, an algorithm is only as good as the data it uses, and without diversity, the unconscious biases of teams can creep into the model.

But with AI, you can also prevent bias by programming in regulatory oversight. You can develop a series of rules to prevent a machine from making the same mistakes a human would make; for example, from issuing a loan inappropriately. In the past, without AI in the mix, a loan officer might overlook creditworthiness if he or she knew the customer. With AI, there are binary decisions about creditworthiness that can't be overlooked.
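The "series of rules" described above can be sketched as a set of hard guardrail checks that block issuance whenever a binary criterion fails, regardless of any relationship-based discretion a human officer might once have exercised. The rule names and thresholds below are illustrative assumptions, not real regulatory policy.

```python
# Hedged sketch of "programming in regulatory oversight": binary checks
# that cannot be overlooked. Any failed check blocks the loan outright.
# Thresholds (580 score, 0.43 DTI) are invented for illustration.

def passes_guardrails(credit_score: int, debt_to_income: float,
                      income_verified: bool) -> tuple:
    """Return (allowed, violations); any violation blocks issuance."""
    violations = []
    if credit_score < 580:
        violations.append("credit score below minimum")
    if debt_to_income > 0.43:
        violations.append("debt-to-income ratio too high")
    if not income_verified:
        violations.append("income not verified")
    return (len(violations) == 0, violations)
```

Because the checks are explicit and binary, the same application always produces the same answer, which is the point being made: the machine cannot waive a rule because it knows the customer.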

As banks around the world begin to apply AI to transform their business, a framework of transparency will drive adoption and help build trust between consumers and banks.

This article was authored by Mark Sullivan, global business leader, banking & capital markets, Genpact, and first appeared in Financial Director.
