Digital Technology
Mar 14, 2018

Opening the black box: Achieving greater transparency with AI systems

When people speak of a “black box,” they usually mean they don’t understand how a particular technology works. Many consider today’s AI systems to be black boxes because even their developers cannot fully explain how the systems arrive at their answers. This post examines the issue and offers recommendations for business decision makers who are evaluating AI systems and want greater transparency.

Dark secret at the heart of AI?

MIT Technology Review called the lack of transparency in some AI “The Dark Secret at the Heart of AI” and described the uncharted territory in which we find ourselves. The article pointed out that we have never before built machines that operate in ways their creators don’t understand, and asked how well we can expect to communicate with, and get along with, intelligent machines that may be unpredictable and inscrutable.

MIT Professor Tommi Jaakkola, who works on applications of machine learning, summed it up this way: “It is a problem already relevant, and it’s going to be much more relevant in the future.  Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

ComputerWorld, meanwhile, tackled the issue in “Why Businesses Will Have to Explain their AI,” and CIO asked the question, “How Transparent is your AI?” Both articles warn that consumer sentiment and new regulations will create the need for greater transparency. Consumers, for example, won’t be satisfied to hear that their mortgage application was rejected “because our AI system said so.”

The CIO article describes the levels of opacity of different types of AI systems. Deep learning neural networks mimic the inner workings of the human brain. These systems are remarkably complex, embedding the behavior of thousands of simulated neurons, arranged into hundreds of intricately connected layers. By its very nature, deep learning is a particularly dark black box.

By comparison, more transparent AI systems rely on techniques whose outputs can be explained. Examples include modestly sized models, such as decision trees or linear scoring models, that show explicitly how each input contributes to the prediction, classification, or decision. I will say more about this type of AI system later.
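To make this concrete, here is a minimal sketch in Python of the kind of transparent model described above. The feature names, weights, and approval threshold are entirely hypothetical, chosen only to illustrate how a small linear scorer can report the per-input contributions behind its decision.

```python
# Minimal sketch of a "transparent" scoring model.
# Feature names, weights, and the approval threshold are hypothetical,
# chosen only to show how such a model can explain its own output.

APPROVAL_THRESHOLD = 0.5

# Each weight states exactly how much a (normalized) input moves the score.
WEIGHTS = {
    "credit_score":       0.45,
    "debt_to_income":    -0.30,
    "years_employed":     0.15,
    "prior_delinquency": -0.40,
}

def score_application(features: dict) -> tuple[bool, dict]:
    """Return the decision plus the per-feature contributions behind it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    total = sum(contributions.values())
    return total >= APPROVAL_THRESHOLD, contributions

if __name__ == "__main__":
    applicant = {
        "credit_score": 0.9,        # inputs normalized to 0..1
        "debt_to_income": 0.6,
        "years_employed": 0.8,
        "prior_delinquency": 0.0,
    }
    approved, reasons = score_application(applicant)
    print("Approved:", approved)
    for name, contribution in sorted(reasons.items(), key=lambda kv: kv[1]):
        print(f"  {name:>18}: {contribution:+.2f}")
```

A model this simple trades some raw predictive power for the ability to justify every outcome, which is exactly the trade-off the rest of this post explores.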

Held to a higher standard?

Some people maintain that it is acceptable not to understand how AI systems arrive at their answers. After all, why would we hold AI to a higher standard when we don’t even understand how the human mind works? Vijay Pande, General Partner at Andreessen Horowitz, a US venture capital firm, told the New York Times that we have “nothing to fear” from the lack of AI system transparency. He argues that doctors often struggle to articulate how they arrived at their diagnoses. That is similar to an AI black box, which “can’t explain the complex, underlying basis for how it arrived at a particular conclusion.”

For typical AI systems, we develop architectures and data models, then provide the system with massive amounts of data. The AI system then formulates answers while continuously learning on its own as it is fed more data sets over time. Mr. Pande believes that “perhaps the real source of critics’ concerns isn’t that we can’t see AI’s reasoning, but that as AI gets more powerful, the human mind becomes the limiting factor.”

What business leaders should do

Business leaders must invest to achieve results and increase their competitiveness; the underlying technology matters far less. Leaders also need to ensure a compelling end-customer experience and compliance with industry regulations. In other words, as I have written before, enterprises need results from AI systems beyond fun and games.

For example, the General Data Protection Regulation (GDPR), adopted by the European Parliament, requires an explanation for any automated decision that has “legal significance.” This will likely make the opaque algorithms used in some AI systems a liability. As a result, GDPR will force organizations to explain how they use their AI systems for material customer decisions. (By contrast, opaque AI will remain acceptable for classifying images, translating speech or text, and certain other tasks.)

Regulatory reasons aside, the cost of a mistake will be another factor that pushes the AI industry to evolve beyond building black boxes. If the cost of a mistake is low, as with a Google search, greater transparency may not matter. However, when a mistake is costly or strategic in nature, AI systems will need to explain the rationale for their decisions.

Genpact, where I work, has developed narrow AI systems using computational linguistics and other technologies that avoid the black box problem by augmenting humans, empowering them to make better decisions. Our AI solutions provide greater trackability; that is, they show the critical pieces of data that drive the final decision and flag values that fall outside company-defined thresholds. They also help companies comply with GDPR and other industry regulations.
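As an illustration only (the function, field names, and thresholds below are hypothetical, not Genpact’s actual implementation), this kind of trackability can be as simple as emitting an audit record with every decision, listing the inputs that contributed most and any values that breached company-defined thresholds.

```python
# Illustrative sketch of decision "trackability": alongside each decision,
# record the top contributing inputs and any threshold breaches so a human
# reviewer or regulator can see why the system answered as it did.
# All names, values, and thresholds here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    decision: str
    top_drivers: list = field(default_factory=list)        # (input, contribution)
    threshold_breaches: list = field(default_factory=list)  # (input, value, limit)

def build_audit_record(decision: str,
                       contributions: dict,
                       thresholds: dict,
                       inputs: dict,
                       top_n: int = 3) -> DecisionRecord:
    """Summarize which inputs drove a decision and which exceeded limits."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    breaches = [
        (name, inputs[name], limit)
        for name, limit in thresholds.items()
        if name in inputs and inputs[name] > limit
    ]
    return DecisionRecord(decision, ranked[:top_n], breaches)

if __name__ == "__main__":
    record = build_audit_record(
        decision="escalate_to_reviewer",
        contributions={"invoice_amount": 0.52, "vendor_risk": 0.31, "payment_history": -0.10},
        thresholds={"invoice_amount_usd": 50_000},
        inputs={"invoice_amount_usd": 72_500},
        top_n=2,
    )
    print(record)
```

A lightweight record like this is often enough to show the work behind a decision even when the underlying model is more complex.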

In summary, enterprises need narrow AI systems with greater transparency that are optimized to improve the end-customer experience, comply with industry regulations, and, perhaps most importantly, build trust with humans. The best researchers are working to bring higher levels of transparency to other types of AI systems, but most experts agree we are a long way from having truly interpretable AI. Your business cannot afford to wait until then.

About the author

Dan Glessner

Vice President, Digital
