
Interview of Luc Julia: “There Is No Such Thing as Artificial Intelligence”

Luc Julia, co-creator of Siri and Samsung's Vice President of Innovation, has just published a book. According to him, the way AI is presented is often wrong.

By Thomas Jardinet · May 20, 2019 · Interview


In your book, There Is No Such Thing as Artificial Intelligence, you deconstruct what is, for you, a myth or a misunderstanding, arguing that what we call artificial intelligence does not exist. What do you think separates AI from intelligence, whether human intelligence or intelligence in the broadest sense?

AI is constructed and controlled by humans. It is based either on rules (expert systems) or on data (machine learning and deep learning). The rules are set by us, and of course the data the models are built from already exists. Human intelligence invents and creates new data that has never been seen before, and breaks the rules to do so.
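To make that distinction concrete, here is a minimal sketch of my own (not from the book) of the two families of AI described above: a hand-written expert-system rule, and a model whose parameters are fitted to data that already exists. The rule, the data, and all names are hypothetical.

```python
# Illustrative only: the two families of "AI" described above, in miniature.

# 1. Expert system: a human writes the rule explicitly.
def is_spam(message):
    """A hand-coded rule; the 'intelligence' here is the rule author's."""
    return "win a prize" in message.lower()

# 2. Machine learning: parameters are fitted to data humans already collected.
xs = [1.0, 2.0, 3.0, 4.0]   # observations that already exist...
ys = [2.1, 3.9, 6.2, 7.8]   # ...following roughly y = 2x

# Ordinary least-squares fit of a line to the past data.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# Either way, the system only reflects what humans put in: the rule we
# wrote, or the data we gathered. Nothing here invents new rules or data.
print(is_spam("You can WIN A PRIZE today!"))  # True
print(slope * 5.0 + intercept)                # ~9.85: a projection of past data
```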

And, to play the naive observer: what would prevent AI from becoming intelligent in the long term? Is machine learning only one of the building blocks of a future intelligent AI?

Again, AI learns from whatever is known at time T. Humans, in addition to learning, invent whatever will exist at time T+1. Current AI techniques do not invent. They do not think. Descartes said "Cogito ergo sum," "I think, therefore I am." AI doesn't think; AI doesn't exist.

So you warn us about technological fantasies, both technical and practical. What do you think they actually are?

I'm a big fan of science fiction, but there is fiction in it. Hollywood is great, but it's not reality. I don't want people to be disappointed by the technology because we promised things that are not achievable, or to develop unrealistic fears. I just want people to be aware of and educated about the possibilities and the limits of these technologies. The previous "AI winters" came about because the science didn't deliver what was promised; we need to avoid a new AI winter so we can continue all the good things that the current technology, which is just in its infancy, can deliver. Being educated also allows us to understand the real dangers of these technologies, not the irrational ones. Like any other tool, AI can be misused, on purpose or not, and can hurt us in the process. But like any other tool, it is our responsibility to use it the right way; it is not the AI that will decide by itself how it can be used.

Concerning the autonomous car: it will never exist with the AI we know today, which is made of mathematics and statistics, expert systems, and neural networks. It is simply impossible to cover all the cases an autonomous car may encounter, because it covers only cases that have already been seen. That does not stop me from preferring to put my children in an autonomous car, because it will be safer than they are, especially when they are tired, for example. Still, there are cases an autonomous car cannot handle. I can mention two amusing ones (a toy sketch of the underlying limitation follows the list):

  • Human driving on the huge roundabout at the top of the Champs-Élysées in Paris. Driving there is like nowhere else, because no rule applies. We negotiate, we look each other in the eye, we go by feel. On that roundabout, an autonomous car would simply never move.

  • Waymo: The Google subsidiary has driven its cars 11 million miles, yet Waymo's cars continue to be surprised, as in the case of two people sitting on the sidewalk holding a stop sign on their shoulders. The autonomous car stops! No one can anticipate that kind of situation, but it happens. And where the autonomous car stops, the driver of a normal car knows he can keep going.
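The limitation behind both anecdotes is the same: a system trained purely on past cases has no answer for a scene it has never encountered. Here is a toy sketch of that behavior; the scenario encodings, names, and freeze-by-default policy are all hypothetical, not Waymo's actual logic.

```python
# Illustrative only: a toy, purely data-driven "driving policy" that can
# answer only for scenarios present in its training data.

SEEN_SCENARIOS = {
    ("stop_sign", "mounted_at_intersection"): "stop_then_go",
    ("pedestrian", "crossing_road"): "brake",
    ("cyclist", "in_bike_lane"): "keep_distance",
}

def decide(scenario):
    """Return the learned action, or freeze when the input was never seen."""
    if scenario in SEEN_SCENARIOS:
        return SEEN_SCENARIOS[scenario]
    # An unseen combination, such as a stop sign carried by people sitting
    # on the sidewalk: there is no learned answer, so the only safe default
    # is to stop and wait, exactly the behavior described above.
    return "stop_and_wait"

print(decide(("stop_sign", "mounted_at_intersection")))  # stop_then_go
print(decide(("stop_sign", "carried_on_sidewalk")))      # stop_and_wait
```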

As for Elon Musk, he is certainly a marketing genius, but he understands nothing about AI; he is not the one to rely on to understand it. He knows how to sell and oversell, and his visions are not necessarily bad, but he needs to come back to reality.

And what about China? Will it become the new AI giant everybody says it will be?

Of course, China has a lot of potential: lots of people, lots of well-educated people who, as in any other country, can develop some very interesting techniques. Its current advantage may be on the regulatory side: more permissive rules allow it to collect more (private) data than other countries allow themselves to collect.

This easier access to data allows China to make rapid technological progress. So even though AI took off later there than in Europe and the USA, and China is starting from further back, it will make progress and do great things.

But the US and Europe also have extremely talented people who will allow them to stay in the race.

Indeed, the USA and Europe have the experience needed to tune the algorithms well. There is still a lot of expertise they can rely on.

In your book, you talk about multidisciplinarity, mentioning that you drastically improved an AI algorithm after reading a book on cat physiology. Should multidisciplinarity be a leitmotif of the practical applications of AI? To what extent should we cultivate it?

I believe in multidisciplinarity because I believe that, in order to break the rules, you need to look at problems, or domains, with the fresh eyes that someone from a different field brings. I believe that discomfort breeds new ideas.

Multidisciplinarity is what will bring about the AI of tomorrow, still fed by plenty of data and use cases.

If we want to get closer to a real intelligence, to a general intelligence, it will never exist unless we change the paradigm. Whereas until now AI approaches have been complemented by multidisciplinarity, tomorrow it will be necessary to start from multidisciplinarity. Biology, mathematics, and quantum physics will all be involved, so these fields must be examined in detail. It is important to note that we understand only 20 to 30 percent of the brain, and therefore there is a risk of starting from a "fake."

You refer to the film Idiocracy, which echoes, in my mind, Asimov's writings, in which the character Salvor Hardin surrounds technology with a mystical aura whose secret only the Foundation holds, and Clarke's third law, which states that "any sufficiently advanced technology is indistinguishable from magic." However, if I understand you correctly, between the overestimation of AI's scope and the fact that it is nothing more than a tool, you find the fears around AI exaggerated. Can you tell us more about that?

The fears are often exaggerated because they make for better stories in the media or, as I said earlier, in Hollywood. I'm not saying we shouldn't be afraid; I'm saying that we are in charge and that we will have to regulate to ensure this tool is used well.

But isn't there still a risk that some countries, whose ethics differ from our democratic values, will use AI and Big Data on a scale and in ways that would bring dramatic changes?

Isn't that the case with other tools, such as nuclear power? The USSR and the USA came up with the SALT agreements in the 1970s because, fundamentally, humans are survivors.

We always end up regulating. Take the knife we used in the caveman era to carve up mammoths: once we discovered we could also kill our neighbor with it, its use was regulated.

There is a lot of talk about ethics and AI, but little about scientists' initiatives on the subject, such as the "Holberton-Turing Oath" and "Data for Good." Can you tell us a little about them?

These initiatives are basically the physicians' Hippocratic Oath adapted for AI scientists: a promise to practice their discipline ethically. Again, regulation will come from many different levels: small groups of people, corporations, countries, or even the whole world.

These are just beginnings; it took a long time for the Hippocratic Oath to be institutionalized. All these initiatives are first steps toward regulation, and they will certainly evolve over time.

How do you see AI in the coming years, whether on a purely scientific level or a practical one?

We're just at the beginning, even with the current techniques, which are 60 years old, mainly because of the explosion of data. Medicine is going to go through a revolution, for instance because DNA lends itself to statistical analysis and because of further progress in medical imaging. Transportation is also going to be much safer, with more sensors and fewer accidents. Our everyday lives will be augmented by these tools, which will simply make things easier, with the objects around us all acting as assistants.
