AI and Transdisciplinarity
Why do you need a philosopher and a biologist on your AI project?
Transdisciplinarity and AI: two subjects often mentioned in the literature but rarely applied together. In this article, I will give you some reasons why they should be combined more often.
Practice First
There are many examples of the need for transdisciplinarity when it comes to AI. I will list a few of them first and then end with my own personal theory.
The first example is the well-known problem of undue bias in the data ingested by an artificial intelligence. We have seen recent cases, such as the AI Amazon built to help screen job candidates. It turned out that this AI, which had obviously ingested a large amount of historical data, had learned to disqualify women and minorities. It is true that white men are overrepresented in the IT industry, but don't you know excellent professionals who are women or members of a visible minority? I certainly know people from all backgrounds who are excellent at their jobs.
These biases were introduced because, if you have to generalize from the historical data, then yes, the typical employee is a white male. But a sociologist would have added that a black woman is likely to be judged more harshly than a white man of equal skill and professional value, and an ethnologist would have explained why this is so damaging (if that even needed to be spelled out). If you don't involve someone with strong sociological skills, you will blindly feed in the data without weighting it and without the ability to say, "Okay, this person doesn't have the typical profile we're looking for, but they may well be worth meeting."

Then you have the case of event prediction. How do you correlate data? If you do not have an expert in your sector, you don't know which variables are really correlated with each other. There are many examples of spurious correlation; I was able to discover, for instance, that in the south of France, the number of homicides correlates with the number of babies named Kevin born in a given year. The example is, of course, laughable, but if you don't have the right expert in the field, you will make the same kind of mistake. You just won't know it!
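To make the point concrete, here is a minimal sketch (in Python, with made-up numbers) of how two completely unrelated yearly series can still show a strong Pearson correlation simply because both drift upward over time. The statistic itself cannot tell you whether the link is meaningful; only someone who knows the domain can.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(2000, 2020)

# Two fictional series that have nothing to do with each other,
# but both happen to grow slowly over the years.
homicides = 50 + 1.5 * (years - 2000) + rng.normal(0, 2, size=years.size)
babies_named_kevin = 200 + 6.0 * (years - 2000) + rng.normal(0, 8, size=years.size)

# The Pearson correlation comes out very high (typically > 0.9),
# even though there is obviously no causal link between the two.
corr = np.corrcoef(homicides, babies_named_kevin)[0, 1]
print(f"Pearson correlation: {corr:.2f}")
```

Any statistics library will happily report the number; it takes a domain expert to say whether it means anything.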
You also have the fact that nature can help you design your own AI algorithm. Luc Julia, stumbling by pure chance upon a book on cat physiology, discovered that the "algorithm" cats use to recognize a shape works in the exact opposite direction of the algorithm he was developing at the time. After "inverting" his code, shape recognition ran 1,000 times faster than his previous algorithm.
Finally, you have the example of the famous subject of ethics. Recently, OpenAI developed an AI capable of writing a full article from just an opening sentence and a few keywords. They decreed that this model would not be made available because it was too dangerous. Okay, I wouldn't spread it around either, but did they base their decision on an ethics committee? Did they share their questions with people outside their organization? Maybe I'm being a little harsh, but on the other hand, haven't we already developed intelligences that raise serious moral questions?
By the way, I can't fail to mention Microsoft's Twitter bot, Tay, which turned racist after learning from the users it interacted with. If your AI keeps learning from humans, what are the risks? How do we limit them?
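One possible safeguard, sketched below as an illustration rather than a description of how Microsoft handled Tay, is to never let raw user messages reach the learning loop directly, but to gate them through a moderation check first. The `is_toxic` callback here is a deliberately naive, hypothetical stand-in for a real moderation model.

```python
from typing import Callable, List

def collect_training_data(messages: List[str],
                          is_toxic: Callable[[str], bool]) -> List[str]:
    """Return only the messages that pass the moderation filter."""
    accepted = []
    for msg in messages:
        if is_toxic(msg):
            continue  # flagged content never reaches the training set
        accepted.append(msg)
    return accepted

# Trivial keyword blocklist standing in for a real moderation model.
blocklist = {"badword"}
filtered = collect_training_data(
    ["hello there", "you are a badword"],
    is_toxic=lambda msg: any(word in msg for word in blocklist),
)
print(filtered)  # ['hello there']
```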
The Theory
In my opinion, when you develop an AI, you have to ask yourself questions like, "If I had the means to assemble a dream team to help me, who would I ask? Who are the experts? Who knows this subject?" Because depending on the AI you are developing, you may need a sociologist to understand social mechanisms, an ethnologist to understand cultural aspects, a philosopher for ethical issues, a biologist to take inspiration from living systems, or even a nuclear expert if you are building a predictive AI for nuclear power plants.
The reality is that reality itself is multifaceted, and that from the hard sciences to the humanities, people with profiles our industry often neglects can become vital to us. So how can we solve this kind of issue easily?
Perhaps by opening up to all these sciences we have neglected. Because if we are to recreate a virtual truth, we need many experts in the real one.
Let us know your thoughts in the comments section.