Enterprise leaders cannot ignore these risks because the potential damage to people and organizations can prove significant, as these two examples show:
- In 2019, the Apple Card's credit limit algorithm came under fire for alleged gender discrimination. Danish entrepreneur David Heinemeier Hansson was granted a credit limit 20 times higher than his wife's, even though they filed joint tax returns and she had the better credit score.
- The same year, the US National Institute of Standards and Technology (NIST) found that facial recognition software used in settings like airport security misidentified certain demographic groups at disproportionate rates. For example, some algorithms produced significantly higher rates of false positives for Asian and African American faces than for Caucasian faces.
These are just two examples among many that have fueled a global debate over the use of AI and prompted lawmakers to propose strict regulations. The resulting lack of trust has pushed AI practitioners to respond with principle-based frameworks and guidelines for the responsible and ethical use of AI.
Unfortunately, putting these recommendations into practice carries many challenges (figure 2). For example, some enterprises have developed guidelines that are either insufficient or overly stringent, making it difficult for AI practitioners to manage ethics across the board. Worse still, other organizations have inadequate risk controls because they lack the expertise to create responsible AI principles.
At the same time, algorithms change constantly, and ethical norms vary across regions, industries, and cultures. These realities make rooting out AI bias even more complex.