
How organizations can develop an AI governance strategy


Today, many companies are entrusting business-critical operating decisions to artificial intelligence (AI). Instead of relying on traditional, rule-based programming, they can now leverage machine data, define the outcomes that matter, and let AI build the algorithms that generate recommendations for the business.

For instance, an auto insurance company can feed a machine a library of photos of cars that have been totaled, along with data about the make and model of the cars as well as the ultimate payout. In this way, the system can be “trained” to review incidents going forward, and even recommend payouts. This streamlines the review process – a positive for the company and customers alike.
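
To make this concrete, here is a minimal sketch of the idea in Python. It assumes the photo evidence has already been reduced to a numeric damage score by an upstream image model; the feature names and claim figures are hypothetical, and a real claims model would be trained on far richer data.

```python
# A minimal sketch of training a payout-recommendation model on past claims.
# All column names and figures are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical historical claims: vehicle details, a damage score derived
# from the photos, and the payout the insurer ultimately made.
claims = pd.DataFrame({
    "vehicle_age_years": [2, 7, 4, 10, 1, 6],
    "vehicle_value_usd": [28000, 9000, 18000, 5000, 35000, 12000],
    "damage_score":      [0.9, 0.6, 0.8, 0.95, 0.4, 0.7],  # 0 = minor, 1 = total loss
    "payout_usd":        [25000, 5500, 14000, 4800, 9000, 8500],
})

X = claims.drop(columns="payout_usd")
y = claims["payout_usd"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# The model now "recommends" payouts for new claims; a human adjuster
# still reviews each recommendation before anything is paid.
print(model.predict(X_test))
```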

However, when AI arrives at its own conclusions, governance over the machines is critical. Was the machine accurate in its review of the accident photos? Was the customer paid the right amount? By taking the proper measures, organizations can gain clarity and ensure they are using these tools responsibly – and for everyone's benefit.

Here are three areas to keep in mind.

Traceability sheds light on machine reasoning and logic

In a recent Genpact study of C-suite and other senior executives, 63% of respondents said it is important to be able to trace an AI-enabled machine's reasoning path. After all, traceability helps companies explain decisions to customers, such as why a loan was approved or declined.

Traceability is also critical for meeting regulatory requirements, especially with the implementation of the General Data Protection Regulation (GDPR) in Europe, which has affected practically every global company today.

One critical GDPR requirement is that any organization using automation in decision-making must disclose the logic involved in the processing to the data subject. Without traceability, companies may struggle to communicate the machine's logic and could face penalties from regulatory bodies.
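
One simple way to provide that kind of traceability is to use an interpretable model whose decision path can be read back to the data subject. The sketch below uses a small decision tree on hypothetical loan features purely to illustrate the mechanics; it is not a prescription for any particular lender's process.

```python
# A minimal sketch of a traceable reasoning path: an interpretable decision
# tree whose rules can be reviewed internally and whose path for a single
# applicant can be disclosed. Features and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income_usd", "debt_to_income", "years_employed"]
X = np.array([
    [85000, 0.20, 6],
    [32000, 0.55, 1],
    [60000, 0.30, 4],
    [25000, 0.60, 0],
    [95000, 0.15, 10],
    [40000, 0.45, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full rule set can be reviewed by compliance teams...
print(export_text(tree, feature_names=features))

# ...and the specific path taken for one applicant can be explained to them.
applicant = np.array([[30000, 0.50, 1]])
visited_nodes = tree.decision_path(applicant).indices
print("Nodes visited for this applicant:", visited_nodes.tolist())
print("Decision:", "approved" if tree.predict(applicant)[0] == 1 else "declined")
```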

The right controls and human intervention remain paramount

By design, AI enables enterprises to review large datasets and facilitate decisions at far greater scale and speed than humanly possible. However, organizations cannot leave these systems to run on autopilot. Humans must retain command and control.

For example, a social media platform can use natural language processing to review users' posts for warning signs of gun violence or suicidal thoughts. The system can comb through billions of posts and connect the dots – which would be impossible for even the largest team of staff – and alert appropriate parties. Of course, not every post that gets flagged will be a legitimate concern, so it will be up to humans to verify what the machine picked up.
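
That flag-then-verify workflow might look something like the following sketch, in which a simple text classifier scores posts and anything above a threshold is routed to a human review queue rather than acted on automatically. The training posts, labels, and threshold here are hypothetical stand-ins, far too small to be meaningful; they only illustrate the workflow.

```python
# A minimal sketch of the flag-then-verify pattern: a classifier scores posts,
# and humans verify everything above the review threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "had a great day at the park with friends",
    "I can't take this anymore, nothing matters",
    "excited about the new job starting monday",
    "I feel like hurting myself tonight",
]
train_labels = [0, 1, 0, 1]  # 1 = concerning, 0 = benign

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_posts, train_labels)

REVIEW_THRESHOLD = 0.5  # hypothetical operating point

new_posts = [
    "looking forward to the weekend",
    "I don't want to be here anymore",
]
scores = classifier.predict_proba(new_posts)[:, 1]

# Posts above the threshold go to people, not to an automatic action.
human_review_queue = [
    (post, round(score, 2)) for post, score in zip(new_posts, scores)
    if score >= REVIEW_THRESHOLD
]
print("Scores:", [round(s, 2) for s in scores])
print("Queued for human review:", human_review_queue)
```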

This highlights why people remain critical in an AI-driven future: only humans possess the domain knowledge – knowledge of businesses, industries, and customers acquired through experience – needed to validate a machine's reasoning.

Command and control is also necessary to ensure algorithms are not being fooled or malfunctioning. Machines trained to identify certain types of images – such as in the auto insurance example mentioned earlier – can be fooled if they are fed completely different images that share essentially the same pixel patterns. Why? Because the machine analyzes the photos based on patterns, not in the context that humans would.
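
One well-known illustration of this pattern-matching blind spot is the fast gradient sign method, in which small pixel changes a person would barely notice can push a model toward a different answer. The sketch below shows only the mechanics; the tiny untrained network and random "photo" are stand-ins for illustration, not a real claims model.

```python
# A minimal sketch of the fast gradient sign method against a stand-in model.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in classifier: flattens a 32x32 grayscale image into 2 classes,
# e.g. "total loss" vs. "repairable". Untrained, purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))

image = torch.rand(1, 1, 32, 32, requires_grad=True)
original_class = model(image).argmax(dim=1)

# The gradient of the loss with respect to the pixels indicates the direction
# in which each pixel should move to make the model more likely to be wrong.
loss = F.cross_entropy(model(image), original_class)
loss.backward()

epsilon = 0.1  # perturbation budget; small enough to look like noise
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction: ", original_class.item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
print("max pixel change:    ", (adversarial - image).abs().max().item())
```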

Beware of unintentional human biases within data

Since AI-enabled machines constantly absorb data and information, biases or unwanted outcomes can easily emerge, such as a chatbot that picks up inappropriate or violent language from interactions over time. If there is bias in the data going in, there will be bias in what the system puts out.

Before deployment, people with domain knowledge must review the data that goes into these machines to prevent possible biases, and then maintain governance to make sure none emerge over time. With more visibility, as well as a better understanding of their data and governance over AI, companies can proactively assess a machine's business rules or acquired patterns before they are adopted and rolled out across the enterprise and to customers.
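
A pre-deployment data review can start with something as simple as comparing historical outcome rates across groups before training. The sketch below uses hypothetical column names and figures, and the 80% threshold is only a common rule of thumb (the "four-fifths rule"), not a legal standard.

```python
# A minimal sketch of one pre-deployment bias check on historical data.
import pandas as pd

historical = pd.DataFrame({
    "applicant_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved":        [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group in the data that would be used for training.
rates = historical.groupby("applicant_group")["approved"].mean()
print(rates)

# Rule of thumb: flag the dataset for human review if any group's rate
# falls below 80% of the highest group's rate.
disparate_impact = rates.min() / rates.max()
if disparate_impact < 0.8:
    print(f"Disparate impact ratio {disparate_impact:.2f} - review before training")
```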

At its root, responsible use of AI is all about trust. Companies, customers, and regulatory agencies want to trust that these intelligent systems are processing information and feeding back recommendations in the right fashion. They want to be clear that the business outcomes created by these machines are in everyone's best interest.

By applying the techniques discussed in this article, organizations can strengthen that sense of trust. They can better understand an AI system's reasoning path, communicate its decisions to customers more clearly, support regulatory compliance, strengthen command and control, and gain the clarity they need to make better decisions.

The article, authored by Vikram Mahidhar, business leader of AI solutions at Genpact, was first published in Information Management.