Ethics in AI: When Will We Progress?
What if ethics in AI are not being studied the right way? What if the relationship between AI and humans is what we need to be decoding?
Ethics of AI is a subject we often hear about; however, I have the feeling that we are not taking it seriously, or not taking it in the right direction.
A Subject That Is Not on the Right Track
When the European Union says that there must be seven main ethical principles around AI, the first of which is respect for human rights, one could laugh. Isn't that a basic, overarching principle that everything in Western democracies is already supposed to respect? And when Google creates an ethics committee that includes a person with homophobic and xenophobic beliefs, even though these positions violate Google's own charter, do we really believe that ethics is being taken seriously?
A Poorly Defined Subject
I hear a number of moral questions about a medical AI that encourages us to use sponsored drugs, or an AI that becomes the new and sole interlocutor for an after-sales service, dehumanizing the relationship. Of course, this seems immoral to many people! But let's imagine for a second that AI didn't exist:
- Would you accept a doctor prescribing a drug just because a pharmaceutical company pays for it?
- Would you accept the complex problem you have with an after-sales service being handled by a person who makes no effort to imagine a solution to it?
These questions do not need an ethics committee to be answered. It is not, for the moment, AI that is responsible; AI is merely the catalyst of these moral problems, and it is pointless to call upon ethics here, because ethics is not morality itself but the framework of reflection through which morality is decided.
For the questions above, the answer will very often be that it is immoral, unless you believe that free enterprise and productivity are inherently moral. Everyone has their own opinion about the morality of AI; the real need for ethical reflection rests on what makes AI unique.
The Uniqueness of AI
The full singularity of AI lies in its relationship with humans. Even though artificial intelligence is not intelligent in itself, it is intended to replace human beings and to enter into a full relationship with them.
This ethical debate should have taken place already. When I advise clients on how they can improve their productivity via IT solutions, with the consequent risks to jobs, I try as much as I can (and it is not easy) to ask them whether this is not also an opportunity to further improve the quality of service to the customer, as well as well-being at work (you can admire a consultant's optimism). Automation had already raised these questions, but the debate never took place.
However, artificial intelligence can potentially go even further, in the sense that it can replace not only low-skilled jobs but also jobs that require extensive intellectual work, all while questioning and talking to us as if we were dealing with a human being. Here are a few questions we could put to an ethics committee:
- Is it up to AI to decide which person an autonomous car should run over? To ensure that AI behaves humanely, is leaving it to chance the best option? Asking whether we should choose the old man or the young teenager makes my hair stand on end, and this kind of hierarchy reminds me of the worst totalitarianisms of the past.
- What moral logic should govern a possible police robot? Should this moral logic be voted on by the people? Or should these possible police robots not interfere with the moral field, which is specific to human beings?
- Doesn't handing decision-making over to AI risk dehumanizing us by effectively atrophying our capacity for reflection and imagination, which AI itself lacks?
- Should a scientist rely on AI tools when he or she is expected to explain and demonstrate everything?
Because a human being is a person, a father, a mother, a brother, or a sister, and because humans are capable of reflection, sensitivity, imagination, and creativity, should we surrender all of this to AI?
We are just beginning to understand the impact of smartphones, including the impact on the brains of young children, so where are we on AI?
Ethical reflection must be carried out with technicians, philosophers, sociologists, doctors, etc., and not with people whose media presence is built on their immoral positions. A first step has been taken by the IEEE through its "Ethics in Action" initiative, which must still be completed and probably amended.
Who's getting started?