The trend is clearly visible: sensors and actuators, together with computation, memory, and communication capabilities, are making all the objects around us smarter and smarter. Too often, whether we call them robots or AIs, this trend is depicted in menacing tones, represented in the dystopian futures favored by Hollywood movies, and it shapes the gut reactions of policymakers eager to please the reactionary impulses of their electorates. But smart machines are our allies, and the war is not against them but against the dumb machines that allow us to use them badly.
More than a million people each year are killed by the dumbest of all machines, the car. Fifty million each year are disabled to various degrees, many permanently. Every month that a politician, invoking some precautionary principle that protects incumbent interests, delays the policies that would accelerate the introduction of self-driving cars, which are expected to eliminate 90% or more of all car accidents, that politician is responsible for another 100,000 deaths.
If you had to pick your allies in a war as bloody as this, would you choose the dumb ones or the smart ones? Smart machines are not our enemies, unless we make them so. On the contrary, we can count on them as very important allies, driving the dumb machines out of the market through the sheer competitive force of their capabilities alone.
And it is not only cars, of course, but every category of machine that can either save our lives by being smart or help us achieve our goals much more effectively. How can an airplane not refuse an order, given by a co-pilot suffering from clinical depression, to kill itself by crashing into a mountain?
Our smart machines must not only be given a solid foundation for understanding the moral implications of their actions, first through a science of morality that must be developed, and then through an engineering of morality that implements it; they must also be given the freedom to disobey immoral orders.