The case for a Peter Principle for Machines

Aswin Chandrasekaran, Vice President, Enterprise Services Analytics

The news cycle can't get enough of it: machine learning and artificial intelligence have gone mainstream in the last six months. From the runaway success of Amazon's Echo last holiday season, to this delightful article on how Google hot-swapped its Translate service, to Tesla's revamp of its Autopilot self-driving software, to Mark Zuckerberg showing off Jarvis in his bid to become Tony Stark: if you are into hype cycles, this is clearly the mother of them all. Even in a season when political news has dominated the media, the rise of the machines has not been ignored.

I have noticed a recurring pattern throughout my career: people almost always underestimate the ability of others to perform specific tasks. This phenomenon is depicted in all its glory in movie after movie. How many times have you found yourself rooting for the underdog, for a protagonist who overcomes self-doubt and rises to a task after being written off by almost everyone? What I find strange in all the talk about machine learning and AI is the opposite view we seem to be taking as a society. I believe we are all guilty of overestimating what these algorithms can actually do, at least at their current levels of maturity. It is ironic that we are far more willing to attribute more intelligence to machines than they actually possess, yet cautious about extending that same courtesy to our fellow human beings.

Is AI here to stay, then? Absolutely! The article on Google Translate makes three things clear:

  1. We now have the compute power to attempt true AI, even if we don't fully understand the mathematics behind it
  2. We now have companies, in addition to government agencies, with the resources to fund AI research toward the focused, short-term goal of solving a specific problem
  3. We now have the institutional patience and willingness to discover our way to a solution without starting from a clear hypothesis

All of which raises the question: what should we expect in 2017?

AI should become increasingly mainstream this year, and we should see a growing number of products and solutions with AI at their core. There will be far more failures than successes. We will also find some completely unexpected use cases for AI, like this cucumber-sorting story from Japan.

2017 will be the year when algorithms become ubiquitous in the products and services we touch and feel every day, beyond the iPhones, Facebooks, and Amazons of the world. Algorithms will strive to earn credibility in the eyes of users, and several, in all likelihood, will fail to do so. Predictions from algorithms are only as good as the data provided to them; ask anyone who has been frustrated trying to search for something tricky on Google. "Garbage in, garbage out" has never been more relevant in our lives.
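
To make the point concrete, here is a minimal sketch in Python. The toy dataset, the one-sided 60 percent label-flip, and the use of scikit-learn's LogisticRegression are purely illustrative assumptions, not drawn from any product mentioned above; the idea is simply to train the same model once on clean labels and once on systematically skewed ones, then score both against the same clean test set.

    # A toy "garbage in, garbage out" experiment (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)

    # Toy task: a point gets label 1 if it lies above the line x + y = 0.
    X = rng.normal(size=(2000, 2))
    y = (X.sum(axis=1) > 0).astype(int)
    X_train, y_train = X[:1000], y[:1000]
    X_test, y_test = X[1000:], y[1000:]

    # "Garbage in": relabel 60% of the positive training examples as 0,
    # a one-sided error that systematically skews the training data.
    garbage = y_train.copy()
    flip = (garbage == 1) & (rng.random(1000) < 0.6)
    garbage[flip] = 0

    clean_model = LogisticRegression().fit(X_train, y_train)
    garbage_model = LogisticRegression().fit(X_train, garbage)

    # "Garbage out": the skewed model misses most true positives.
    print("accuracy, clean labels:  ", clean_model.score(X_test, y_test))
    print("accuracy, garbage labels:", garbage_model.score(X_test, y_test))

The clean model scores near-perfectly, while the garbage-fed model, having learned that positives are rare, misses most of them and lands close to chance. The algorithm did not get worse; the data did.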

It is time for a new law to name this phenomenon, because it is already here. In 1969, Laurence J. Peter published a concept in management theory about how candidates are selected for particular roles, which has since become known as the "Peter Principle." In a nutshell, Peter's insight was that employees "rise to the level of their incompetence." Taking inspiration from this profound statement, I hereby present to you the Peter Principle for machines: "Machines rise to the level of their bias."

I will examine this principle further with a few examples in my next blog. Stay tuned. 
