AI Must Not Change Us
When it comes to ethics and AI, the simple questions should be asked!
I have already shared some thoughts with you recently on the subject of artificial intelligence and ethics. Much is said on this topic, though rarely anything very profound. From my personal point of view, what if the key is to think about our relationship with AI?
AI is not human
Let me get straight to the heart of the matter: AI must under no circumstances alter the otherness of the human being, and it must not replace it!
Human beings have creativity, imagination, and intuition, but also flaws: biases, a capacity for destruction, and hatred. We are a bit like a data lake, and when we mix different individuals together, we get amazing results. Innovation, value creation, and more have all been achieved by human beings. It is true that we expect the same from AI, but are we sure it will achieve the same richness? And the "defects" of human beings, however regrettable, also allow us to keep learning. We are, therefore, as much about machine learning as about the data lake in each of us!
So if we replace a number of actions and decisions with AI, we lose the richness that lies in the relationships between people and within each person individually. After all, what is the point of thinking if machines do it for us?
AI is not moral
I approach this from a rather "productivist" angle: if AI replaces us, we end up a useless zombie species. But there is also the moral issue. Consider the question of whom an autonomous car should hit if it had to choose between killing an old man or a child. Some would like the AI to decide according to a certain moral framework. But would you, in its place, act according to your morality? You would certainly act on primary instinct, able to process only one or two pieces of information. Besides, this question is, in my opinion, totally horrible: beyond posing such a case of conscience, it implies that we would no longer bear responsibility for morality ourselves. Put a government in charge of deciding what good morality is, and you can write a pretty chilling episode of Black Mirror!
In short, we have everything to lose by letting AI take over our identity as human beings, and it is up to us to remain vigilant. Otherwise, our lives will be of little value.