
How to create an ethical framework for artificial intelligence

For many businesses, applying artificial intelligence (AI) is not a question of if, but when. According to Genpact's latest AI 360 study, a quarter of senior executives say they plan to fundamentally reimagine their businesses through AI by the end of 2021.

As AI continues to increase its influence on business decisions, governments are taking steps to ensure that all AI usage is ethical. In April 2019, US lawmakers introduced the Algorithmic Accountability Act to check automated decision-making systems for bias. In the same month, the European Union released guidelines for evaluating AI applications for fairness, transparency, and accountability.

These initiatives all point to the need to establish ethical AI frameworks for effective decision-making without misuse of data. With these frameworks, businesses can build trust with consumers, which supports not only AI adoption but also brand reputation.

When creating an ethical framework for AI, we need to consider the following:

The purpose of using AI

As a tool, AI is neither good nor bad. Think of a hammer: you can use it to build a house or to hit someone over the head. Intended purpose is what separates ethical AI applications from questionable ones. Businesses should evaluate intended purpose before rolling out an AI initiative, and they should keep monitoring the initiative to make sure it does not drift toward unethical use.

Show and tell

Because AI decisions affect customers most directly, transparency is essential: businesses must be able to explain AI reasoning. Not all AI applications can be fully explained, but some can provide "breadcrumb trails" that trace a decision back to a single data point.

For example, in commercial lending, AI can ingest thousands of balance sheets, read all of the text and footnotes, calculate a risk score dynamically, and recommend the approval or denial of a loan.

But denying a loan simply by taking the machine at its word is not enough; doing so can lead to a poor customer experience and compliance concerns. Businesses need applications with tracking so that they can drill down to the specific document or footnote that led to the recommendation. Then they can show customers and auditors exactly where and how the system came to its decision. Such transparency creates trust between all parties.
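
To make this concrete, here is a minimal sketch of what such a breadcrumb trail could look like in code. It is illustrative only: the Evidence and ScoredDecision structures, their field names, and the contribution values are hypothetical assumptions, not a description of any particular lending system.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One data point that contributed to the score (hypothetical structure)."""
    document: str        # e.g., "balance_sheet_2020.pdf"
    location: str        # e.g., "footnote 12"
    contribution: float  # signed effect on the risk score

@dataclass
class ScoredDecision:
    """A risk score bundled with the trail of evidence that produced it."""
    risk_score: float
    trail: list = field(default_factory=list)

    def explain(self) -> None:
        # Show the most influential data points first, so a reviewer can
        # trace the recommendation back to specific documents.
        for ev in sorted(self.trail, key=lambda e: abs(e.contribution), reverse=True):
            print(f"{ev.document} ({ev.location}): {ev.contribution:+.2f}")

# Usage: record evidence as each document is processed, then explain on demand.
decision = ScoredDecision(risk_score=0.72)
decision.trail.append(Evidence("balance_sheet_2020.pdf", "footnote 12", +0.31))
decision.trail.append(Evidence("income_statement_2020.pdf", "page 3", -0.05))
decision.explain()
```

The design idea is simply that the score never travels without its trail, so the answer to "why was this loan denied?" is always one lookup away.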

The potential for bias

Our AI 360 study shows that a large majority (78%) of consumers expect companies to proactively address potential biases and discrimination in AI. Tackling bias starts with recognizing where it comes from – and that's often data and people. Enterprises have to watch out for both.

For instance, if an HR department uses personnel data from a homogeneous group for recruiting, then the AI algorithm will likely be biased toward that initial sample. Therefore, it might only recommend similar people for new positions.
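
A first line of defense is to audit the training sample before it ever reaches the model. The sketch below is a hedged illustration of that idea: the record fields, group labels, and the 10% threshold are all assumptions chosen for the example, not a standard method.

```python
from collections import Counter

def representation_report(records, group_key):
    """Flag groups that are under-represented in a training sample.

    `records` is assumed to be a list of dicts describing past hires;
    the field names and the 10% threshold are illustrative assumptions.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented" if share < 0.10 else ""
        print(f"{group}: {n} records ({share:.0%}){flag}")

# A sample drawn almost entirely from one group is flagged here,
# before the algorithm ever learns to prefer that group.
training_sample = (
    [{"background": "group_a"}] * 90
    + [{"background": "group_b"}] * 8
    + [{"background": "group_c"}] * 2
)
representation_report(training_sample, group_key="background")
```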

Furthermore, since machines are trained and tuned by people, the algorithms may replicate the unconscious biases of the people working on them. When teams lack diversity, it's easy for the thinking of a select few to influence AI decisions.

As part of ethical AI frameworks, business leaders must encourage diversity. The goal is comprehensive data samples that cover all scenarios and users, which minimizes bias. If comprehensive data is lacking, external sources of synthetic data can fill the gaps. Likewise, we should aim to have diverse teams with a wide range of skills and backgrounds, including digital and industry talent – or better yet, people who can think from both sides. A diverse team can form an ethics committee that looks for unethical use or unwanted outcomes.

Safe and sound

Ethical AI relies on secure data. If data is insecure, businesses run the risk of corruption that can skew outputs. To test the security of its vehicle systems – and thereby the safety of drivers and passengers – Tesla held a hacking contest, promising cash prizes and a new car to anyone able to hack its Model 3. Tesla then used insights from the winning team to develop a security update. Similarly, businesses should take active measures to protect the data used by AI applications and look for new vulnerabilities.

Security ties back to the need for governance. While our AI 360 study shows that 95% of companies say they are taking steps to combat bias, only a third of them have the governance and internal control frameworks required to do so. For AI to provide ethical benefits for all, we need the ability to monitor the relevant models.

One potential solution is to use a visualization dashboard. A dashboard can provide a single view of all automated operations. It's then easier to monitor AI applications, robotic process automation, and other intelligent automation for performance, security, and fairness.
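
As a rough illustration, such a dashboard could sit on top of a small aggregation layer like the one sketched below. The process names, metric fields, and figures are hypothetical; a real deployment would pull these numbers from live systems rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class ProcessMetrics:
    """Per-process figures a dashboard might consolidate (hypothetical fields)."""
    name: str
    decisions: int           # total automated decisions made
    errors: int              # failed or corrected decisions
    flagged_for_review: int  # outcomes escalated to a human reviewer

def dashboard_view(processes):
    """Print one consolidated view across all automated operations."""
    print(f"{'process':<16}{'decisions':>10}{'error %':>9}{'flagged %':>11}")
    for p in processes:
        err = p.errors / p.decisions if p.decisions else 0.0
        flg = p.flagged_for_review / p.decisions if p.decisions else 0.0
        print(f"{p.name:<16}{p.decisions:>10}{err:>9.1%}{flg:>11.1%}")

dashboard_view([
    ProcessMetrics("loan_scoring", 12_400, 31, 210),
    ProcessMetrics("invoice_rpa", 58_900, 112, 0),
])
```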

To deliver ethical AI strategies, businesses must put these frameworks in place. They can then clearly explain how they use and secure data, monitor AI activities, arrive at decisions, and control unwanted scenarios. Ultimately, this protects customers and stakeholders alike and builds trust in businesses and their use of technology.

A version of this point of view originally appeared on Information Management.