There's a lot of hand-wringing about the idea that increasingly smart computers will soon render human employees obsolete. There are many reasons this is unlikely to happen, but one of the most important is that humans and machines are simply better together.
In Cognitive Collaboration, a Deloitte University Press paper, authors Jim Guszcza, Harvey Lewis, and Peter Evans-Greenwood reminded us of what famous cognitive scientist J. C. R. Licklider articulated about artificial intelligence many years ago.
"Rather than speculate about the ability of computers to implement human-style intelligence, Licklider believed computers would complement human intelligence," wrote the Deloitte authors. "He argued that humans and computers would develop a symbiotic relationship, the strengths of one counterbalancing the limitations of the other."
Humans Identify Problems for AI to Solve
How might this work? Well, we already see it today in apps like Google Translate and Waze. Humans specify goals and criteria, and algorithms do the heavy lifting of sifting through data to surface the most relevant insights and options for decision-making. In many cases, we've already identified the sweet spot where artificial intelligence adds the most value: a large data set on which one wishes to perform a routine task.
But when it comes to a novel situation or problem, you need a human to formulate hypotheses and decide which ones to test because, as the Deloitte authors pointed out, algorithms lack the conceptual understanding and commonsense reasoning needed to do anything more than make inferences from structured hypotheses. Human judgment is absolutely required to keep algorithms and their output in check.
"In no case is human intelligence mimicked; in each case, it is augmented. It turns out that the human mind is less computer-like than originally realized, and AI is less human-like than originally hoped," Guszcza, Lewis, and Evans-Greenwood wrote in Cognitive Collaboration.
Human Judgment Is Imperfect but Essential
Cognitive scientists might like to think that AI decision-making processes mirror human ones, but this is far from the case.
"Rather than laboriously gathering and evaluating the relevant evidence, we typically lean on a variety of mental rules of thumb (heuristics) that yield narratively plausible, but often logically dubious, judgments," said the Deloitte paper. "We let our emotions cloud our decisions and overgeneralize from personal experience. Minds need algorithms to de-bias our judgments and decisions as surely as our eyes need artificial lenses to see adequately."
Not only are artificial minds less biased, but they don't fatigue and they apply consistent effort regardless of circumstance. They can pull the most relevant ideas out of big data systems in mere seconds, and they can examine so many sources simultaneously that making an accurate prediction about a particular future situation becomes almost routine.
Notice, though, that I said less biased, not unbiased. The Deloitte authors cautioned us to avoid outsourcing tasks associated with fairness, societal acceptability, and morality to AI systems. Algorithms cannot be assumed to be fair or objective simply because they use hard data, and oversight is required.
"Recent examples of algorithmic bias include online advertising systems that have been found to target career-coaching service ads for high-paying jobs more frequently to men than women, and ads suggestive of arrests more often to people with names commonly used by black people," the Deloitte paper shared. And: "If the data used to train an algorithm reflect unwanted pre-existing biases, the resulting algorithm will likely reflect, and potentially amplify, these biases."
In other words, the way to eliminate bias is not to trust that AI will do it on its own, but to teach humans how to spot and correct it so we can mold smart machines in our own new and improved image. And this means, of course, that we still have our jobs.
Where do you think automation is headed?