How to create an ethical framework for artificial intelligence

For most large enterprise leaders, the question of applying Artificial Intelligence (AI) to transform their business is not a question of if, but when. Almost all Fortune 500 companies are seeing applications of AI that will fundamentally change the way they manufacture products, deliver goods and services, hire employees or delight customers.

As AI becomes increasingly involved in our personal and professional lives, governments and enterprises alike have started to take steps to provide an ethical framework for its use, such as the American AI Initiative and the Algorithmic Accountability Act in the US, and the EU guidelines for evaluating AI applications in areas such as fairness, transparency, security, and accountability.

All of these initiatives underscore the need for enterprises to establish their own ethical frameworks for AI. Such frameworks ensure that AI continues to lead to the best decisions, without unintended consequences or misuse of data and analytics. Ethical use can help build trust between consumers and organizations, which benefits not only AI adoption, but also brand reputation.

In the development of ethical frameworks for AI, we need to factor in the following principles:

Intended use

One of the most important questions to ask when developing an AI application is: "Are we deploying AI for the right reasons?" You can use a hammer to build a house, or you can use it to hit someone. Just like a hammer, an AI tool is neither good nor bad; it is how you use it that can become a problem.

AI can do a lot of good: it can improve and speed up the decision-making process for approving a loan, an insurance claim, or a hire, which leads to more positive customer experiences. In a previous article, I discussed how HR departments can use AI to review job descriptions to prevent bias and be more inclusive in the hiring process. Enterprises should incorporate an initial ethical evaluation of the intended use before rolling out an AI initiative, and should continuously monitor these models to make sure they do not drift toward unethical uses.

The intended use, as well as the relevant data fed to algorithms and the resulting outcomes, should also be fully transparent to the people affected by the machines' recommendations. In California, a new law taking effect in July 2019 requires chatbots to disclose that they are automated systems, to avoid misleading users. Beyond simple disclosure, explainability is increasingly required, especially in regulated industries.

We have not yet achieved full explainability, but some applications of AI can show the “breadcrumb” trail, revealing which data points led to a specific decision. For example, in commercial lending, we can take 5,000 balance sheets today, in two different accounting standards and two different languages, and calculate a risk score dynamically: reading all the text and the footnotes, and understanding the content in order to apply a score to each balance sheet.

But if you just provide the score to the company requesting a loan, that won't be enough. AI applications have to have traceability built in, so you can click and drill down through the documentation to the exact footnote on the page where the data behind the decision sits.
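To make the idea concrete, here is a minimal sketch, in Python, of what such a traceability trail could look like. The field names (for example, source_document and footnote_ref), the weights, and the scoring rule are hypothetical illustrations, not how any particular lending system works.

```python
# A minimal, illustrative sketch of decision traceability for a risk score.
# Field names, weights, and the scoring rule are hypothetical examples only.

from dataclasses import dataclass

@dataclass
class Evidence:
    factor: str            # which input drove this part of the score
    value: float           # the value read from the balance sheet
    contribution: float    # how much it moved the risk score
    source_document: str   # file the value came from
    footnote_ref: str      # exact location a reviewer can drill into

def score_balance_sheet(items, weights, sources):
    """Return a risk score plus the evidence trail behind it."""
    trail, score = [], 0.0
    for factor, value in items.items():
        contribution = weights.get(factor, 0.0) * value
        score += contribution
        doc, note = sources.get(factor, ("unknown", "n/a"))
        trail.append(Evidence(factor, value, contribution, doc, note))
    return score, trail

# Example usage with made-up numbers
items = {"debt_to_equity": 1.8, "current_ratio": 0.9}
weights = {"debt_to_equity": 0.4, "current_ratio": -0.3}
sources = {
    "debt_to_equity": ("balance_sheet_2019.pdf", "footnote 12, page 3"),
    "current_ratio": ("balance_sheet_2019.pdf", "footnote 4, page 1"),
}

score, trail = score_balance_sheet(items, weights, sources)
print(f"risk score: {score:.2f}")
for e in trail:
    print(f"  {e.factor}={e.value} contributed {e.contribution:+.2f} "
          f"(see {e.source_document}, {e.footnote_ref})")
```

With a record like this behind every score, a loan officer or auditor can follow each contribution back to the document and footnote it came from, which is the kind of drill-down the paragraph above describes.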

Transparency and explainability promote trust in how machine learning models reach their decisions and increase adoption of AI.

Avoiding bias

There are two different sources of bias: data and teams. Enterprises have to watch out for both. For instance, if you use employment data from a homogeneous population and try to recruit based on that data set, your algorithm will be biased towards the sample that you know, and might recommend only resumes from that sample.

The algorithm may be solid but if the data is flawed the outcome will be too. Further, since machines are trained and tuned by people, the algorithms may replicate the unconscious biases of the teams working on them, especially if the teams lack diversity. It is easy for the thinking of a select few to unknowingly seep into the algorithms.

As part of an ethical framework for AI, enterprises need to proactively encourage diversity to prevent biases from manifesting. The goal is to have complete and well-rounded datasets that can cover all possible scenarios and won't have negative impacts on groups due to race, gender, sexuality, or ideology.

If we lack comprehensive data, external sources of synthetic data can help fill in the gaps. In parallel, we should aim to have teams with diverse skills and backgrounds to work on developing and training the algorithms, as well as to look for questionable ethical use, and monitor for unwanted outcomes. These people can serve as a digital ethics committee.
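As an illustration of one check such a committee could run, the sketch below uses pandas to compare selection rates across groups and flag large gaps for review. The column names and the 0.8 threshold are hypothetical examples, loosely modeled on the familiar four-fifths disparity guideline, not a standard from the article.

```python
# A simplified disparate-impact check across groups in hiring data.
# Column names ("group", "hired") and the 0.8 threshold are illustrative.

import pandas as pd

def selection_rate_report(df, group_col="group", outcome_col="hired", threshold=0.8):
    rates = df.groupby(group_col)[outcome_col].mean()      # selection rate per group
    best = rates.max()
    report = pd.DataFrame({
        "selection_rate": rates,
        "ratio_to_best": rates / best,                      # 1.0 = matches best group
    })
    report["flag"] = report["ratio_to_best"] < threshold    # flag large gaps for review
    return report

# Example with made-up data
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0,   0],
})
print(selection_rate_report(data))
```

A simple report like this does not prove or disprove bias on its own, but it gives a diverse review team something concrete to investigate before a model reaches production.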

Security and governance

AI is only as good as the data used to train it. If we are using customer data to make critical decisions, we have to make sure the information is secure, to prevent tampering and corruption that can alter the output to the detriment of others. To ensure the security of its vehicles' systems, and thereby the safety of drivers and passengers, Tesla held a hacking contest in March 2019, promising cash prizes and a new car to anyone able to hack its Model 3. Tesla then used the winning team's findings to develop a software update that addressed the vulnerabilities. Enterprises should evolve the security protocols around their data and applications to ensure that the data used in AI initiatives is secure.
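One basic control in this spirit, sketched below under assumed file names and formats, is to record cryptographic checksums of training data and verify them before each training or scoring run, so that silent tampering is at least detectable. This is an illustration of the principle, not a prescription from the article.

```python
# A minimal integrity check for training data: record SHA-256 checksums
# in a manifest and verify them before the data is used. Paths are examples.

import hashlib
import json
from pathlib import Path

def file_digest(path):
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir, manifest_path):
    manifest = {p.name: file_digest(p) for p in Path(data_dir).glob("*.csv")}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir, manifest_path):
    """Return the names of files whose contents no longer match the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [name for name, digest in manifest.items()
            if file_digest(Path(data_dir) / name) != digest]

# Example usage (assumes a training_data/ directory of CSV files)
# write_manifest("training_data", "manifest.json")
# tampered = verify_manifest("training_data", "manifest.json")
# if tampered:
#     raise RuntimeError(f"Training data changed unexpectedly: {tampered}")
```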

Security ties back to a larger need for governance over AI systems. Findings from Genpact's latest AI 360 research show that 95 percent of companies are taking steps to combat bias, but only a third have comprehensive governance and internal control frameworks to do so.

For AI to deliver on its promise in an ethical, beneficial way, more governance frameworks need to be in place, including continuous oversight to see that AI models do not deviate from their intended use, introduce or develop bias, or expose people to danger. A potential solution is to develop a “visualization dashboard” to oversee automated operations. A dashboard can provide a single view to monitor all AI applications, robots, and other intelligent automation for both performance and fairness.
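As a rough sketch of the kind of check such a dashboard might surface, the example below compares each model's recent approval rate and fairness gap against baseline thresholds and raises alerts when they drift. The metric names, baselines, and tolerances are illustrative assumptions, not figures from the research.

```python
# Illustrative monitoring checks a dashboard might surface for each AI application.
# Metric names, baselines, and thresholds are hypothetical examples.

from dataclasses import dataclass

@dataclass
class ModelSnapshot:
    name: str
    approval_rate: float      # share of positive decisions in the latest window
    baseline_rate: float      # approval rate observed at deployment
    fairness_gap: float       # e.g. largest gap in approval rate between groups

def review_flags(snapshots, drift_tolerance=0.10, fairness_tolerance=0.05):
    """Return human-readable alerts for models drifting from their intended behavior."""
    alerts = []
    for s in snapshots:
        if abs(s.approval_rate - s.baseline_rate) > drift_tolerance:
            alerts.append(f"{s.name}: approval rate {s.approval_rate:.0%} has drifted "
                          f"from baseline {s.baseline_rate:.0%}")
        if s.fairness_gap > fairness_tolerance:
            alerts.append(f"{s.name}: fairness gap {s.fairness_gap:.0%} exceeds tolerance")
    return alerts

# Example usage with made-up numbers
snapshots = [
    ModelSnapshot("loan_approval", approval_rate=0.62, baseline_rate=0.71, fairness_gap=0.03),
    ModelSnapshot("claims_triage", approval_rate=0.55, baseline_rate=0.56, fairness_gap=0.09),
]
for alert in review_flags(snapshots):
    print(alert)
```

Surfacing simple, regularly refreshed indicators like these in a single view is what makes continuous oversight practical rather than a one-time audit.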

As governments around the world move to create a more ethical future around AI, we have to make sure we have proper frameworks to clearly explain how we use and secure data, monitor AI activities, arrive at decisions, and generally control for unwanted scenarios. Ethical use of AI becomes not only a matter of regulatory compliance, but also a way to protect customers and cement their trust in our companies and technology.

This article was authored by Sanjay Srivastava, chief digital officer, Genpact, and first published in Information Management.
