Digital Technology
Aug 23, 2018

Conversational AI - From digital first to human first

In the last few years, many boardroom discussions have echoed with the words "digital first," thanks first to the internet and later to smartphones. Customer-centric strategies have evolved as well: from putting a customer relationship management (CRM) system in place, to providing multiple channels to interact, to creating a seamless experience across those channels, and beyond. The push has been to move more and more interactions to digital channels, and rightly so, given customer preferences as well as the cost of operations.

This has led to a plethora of digital interfaces, such as interactive voice response (IVR), the web, and mobile apps, which created the foundation for digital-first self-service. That is about to change again with conversation-based interfaces such as Google Home and Amazon Echo. And why not? After all, it's easier for humans to talk than to type. However, to travel the path from digital first to human first, several factors need to be considered.

  • Deep understanding of domain/context – Having a human-like conversation requires a deep understanding of the domain and the context. In a recent demo, Google Duplex demonstrated this in-depth understanding by asking about the wait time for a restaurant booking. This act, while relatively simple for a human, entails painstaking training and supervision on a large dataset for a machine.
  • Conversational AI is not a panacea – Don't try to solve every problem with AI. Given the amount of data and training required, AI is best applied to specific problems, at least for now. For example, the conversational/chat bots in banking are solving specific problems: HARO and DORI from Hang Seng Bank are multilingual chatbots focused on simple consumer banking actions; Ceba from CBA can assist customers with 200 banking tasks; and Clinc, used by USAA, leverages tonal and acoustic data in voice to provide almost human-like interactions to customers.
  • Privacy and security – The recent incident of Alexa sending a private conversation to a person in the owner's contact list has exacerbated security and privacy concerns. The fear of being continuously listened to is not going away soon, and it will remain a major factor in the universal adoption of the technology.
  • Don't let the machines run on their own – Experience repeatedly shows that you can't let machines run unsupervised. Facebook had to shut down its chatbot experiment after the bots developed their own language, and Microsoft had to take down Tay after it turned racist within 24 hours. Clearly, it's important to have a very strong governance layer around any intelligence we build.
  • Handling bias – Machines are going to be only as good or as bad as the humans behind them. It is unfair to expect AI to be unbiased if the underlying data reflects stereotypes. The important thing is to acknowledge and understand our own biases and to minimize them by being thoughtful about the data strategy; having diverse teams helps, too.

Even with these considerations, conversational AI is moving at a very fast pace. But one thing is for sure: it's not going to replace humans, at least not in the near future. What do you think?

This blog was first published on Finextra.

About the author

Rakesh Chhabra


Digital Director, Banking Financial Services
