The potential for bias
Our AI 360 study shows that a large majority (78%) of consumers expect companies to proactively address potential biases and discrimination in AI. Tackling bias starts with recognizing where it comes from – and that's often data and people. Enterprises have to watch out for both.
For instance, if an HR department uses personnel data from a homogeneous group for recruiting, then the AI algorithm will likely be biased toward that initial sample. Therefore, it might only recommend similar people for new positions.
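To make this mechanism concrete, here is a minimal sketch of how a naive recruiting model trained only on a homogeneous group ends up favoring similar candidates. All names, features, and the scoring rule are invented for illustration – this is not any real HR system's model.

```python
# Hypothetical illustration: a recruiting model trained on a homogeneous
# sample of past hires tends to recommend candidates who resemble them.
from statistics import mean

# Historical hires drawn from one narrow profile
# (same school tier, similar years of experience) - invented data.
historical_hires = [
    {"school_tier": 1, "years_exp": 5},
    {"school_tier": 1, "years_exp": 6},
    {"school_tier": 1, "years_exp": 4},
]

def similarity_score(candidate, hires):
    """Naive model: score a candidate by average closeness to past hires."""
    return mean(
        1 / (1 + abs(candidate["school_tier"] - h["school_tier"])
               + abs(candidate["years_exp"] - h["years_exp"]))
        for h in hires
    )

# A candidate who matches the historical profile outscores an equally
# experienced candidate from a different background.
insider = {"school_tier": 1, "years_exp": 5}
outsider = {"school_tier": 3, "years_exp": 5}

print(similarity_score(insider, historical_hires))   # ~0.67
print(similarity_score(outsider, historical_hires))  # ~0.28
```

The model never sees an explicit rule to exclude anyone; the skew comes entirely from the one-sided training sample, which is exactly why the composition of the data matters.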
Furthermore, since machines are trained and tuned by people, the algorithms may replicate the unconscious biases of the people working on them. When teams lack diversity, it's easy for the thinking of a select few to influence AI decisions.
As part of ethical AI frameworks, business leaders must encourage diversity. The goal is data samples comprehensive enough to cover all scenarios and users, which reduces the risk of bias. Where comprehensive data is lacking, external sources of synthetic data can fill the gaps. Likewise, teams should bring a wide range of skills and backgrounds, including digital and industry talent – or better yet, people who can think from both sides. A diverse team can form an ethics committee that watches for unethical use or unwanted outcomes.
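One way to read "synthetic data can fill the gaps" is oversampling an underrepresented group with generated records. The sketch below uses simple jittering of real records as a simplistic stand-in for a proper synthetic-data generator; the field names and numbers are invented for illustration.

```python
# Hypothetical sketch: balancing a skewed dataset by adding synthetic
# records for the underrepresented group ("B"). Not a production method -
# jittering stands in for a real synthetic-data tool.
import random

random.seed(0)  # reproducible example

# Invented, skewed dataset: 90 records from group A, only 10 from group B.
data = (
    [{"group": "A", "score": random.gauss(70, 5)} for _ in range(90)]
    + [{"group": "B", "score": random.gauss(72, 5)} for _ in range(10)]
)

def synthesize(records, n):
    """Generate n synthetic records by jittering randomly chosen real ones."""
    return [
        {"group": r["group"], "score": r["score"] + random.gauss(0, 1)}
        for r in (random.choice(records) for _ in range(n))
    ]

group_b = [r for r in data if r["group"] == "B"]
balanced = data + synthesize(group_b, 80)  # bring group B up to 90 records

print(sum(r["group"] == "B" for r in balanced))  # 90
```

Jittered copies only recombine what little data exists for the group, so this technique reduces imbalance but cannot substitute for genuinely collecting more diverse data – which is why the team-diversity point above matters as much as the data itself.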