AI vs. Machine Learning vs. Deep Learning

Although the title of this post pits AI, ML, and DL against each other, it could actually read AI ⊃ ML ⊃ DL because these are not distinct fields but rather subsets of each other.

Go through the technology roadmap of any organization worth its salt and you will almost certainly find AI on it, along with associated terms like machine learning and deep learning.

In some cases, these phrases are used interchangeably, and not entirely accurately.

So, here is a quick post to help explain some of the terms a little better.

Although the title of the post says AI vs. machine learning vs. deep learning, it should actually read AI ⊃ machine learning ⊃ deep learning, because these are not distinct fields but rather subsets of each other: deep learning is a subset of machine learning, which is itself a subset of AI.

Artificial intelligence systems, as they started off and in their simplest forms, were rule-based and knowledge-based systems. They worked fine in scenarios where the "knowledge" was codified and the system could be represented by a finite set of rules: a game of chess, say, or an "If she says 'Hello,' then say 'Hi'" exchange.
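To make that concrete, a rule-based system really is just a fixed set of hand-written condition/response rules. Here is a minimal Python sketch; the rules and replies are invented for illustration:

```python
# A minimal sketch of a rule-based ("knowledge-based") system.
# All the "knowledge" is codified up front; nothing is learned.
RULES = {
    "hello": "Hi!",
    "how are you?": "Fine, thank you.",
}

def respond(utterance: str) -> str:
    # The system only works when the input matches a codified rule.
    return RULES.get(utterance.strip().lower(), "I don't understand.")

print(respond("Hello"))           # Hi!
print(respond("What is love?"))   # I don't understand.
```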

But, of course, the world is not codified, so these systems were limited. The majority of knowledge is tacit and informal, so it is nearly impossible for us to write it down as rules and instructions for a computer to follow. Enter machine learning.

With machine learning algorithms, the system identifies patterns in prior data points and uses them to arrive at supposedly subjective decisions, provided data is available on the right set of features. "Is this mail spam or not?" (classification) and "How many comments is this article likely to garner?" (regression) are typical examples. Since we supervise the system by feeding it previous data points, with the right answers "labeled" as such for it to learn from, this is called supervised learning (another term you are likely to hear a lot).
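As a concrete sketch of supervised learning, here is the spam example in scikit-learn. The four labeled mails and the choice of a naive Bayes classifier are assumptions made purely for illustration:

```python
# A minimal supervised-learning sketch: a spam classifier trained on
# labeled examples. The tiny dataset is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

mails = [
    "win a free prize now",        # spam
    "claim your free reward",      # spam
    "meeting agenda for monday",   # not spam
    "lunch at noon tomorrow?",     # not spam
]
labels = [1, 1, 0, 0]  # the "right answers," labeled for the system to learn from

# Turn raw text into word-count features, then fit on the labeled data.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(mails)
model = MultinomialNB().fit(X, labels)

# Classify a previously unseen mail.
print(model.predict(vectorizer.transform(["free prize inside"])))  # [1], i.e. spam
```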

When the system, on its own, groups or "clusters" data points based on similarities, without any supervision from us, it is called unsupervised learning. Any clustering exercise is a good example: identifying a segment of customers expected to respond to a product in a certain way, identifying a certain type of tissue in a given lab culture, and so on.
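And here is the unsupervised counterpart: k-means clustering with scikit-learn, which groups points by similarity without being given any labels. The 2-D points (standing in for, say, two customer attributes) are invented for illustration:

```python
# A minimal unsupervised-learning sketch: k-means discovers groups on
# its own; no labels are supplied. The points are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
    [8.0, 8.2], [7.8, 8.1], [8.3, 7.9],   # another natural group
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: two clusters found without supervision
```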

But the problem is identifying all the features needed to make the model an accurate representation of the scenario being processed.

A plausible solution is for the system not only to produce the output from the representation but also to learn the representation itself. This is known as representation learning.
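A standard illustration of representation learning (not from this post, but a classic one) is an autoencoder: a network that compresses its input into a small "code" and learns to reconstruct the original from it, so the features are learned rather than hand-picked. A minimal Keras sketch, with sizes chosen arbitrarily:

```python
# A representation-learning sketch: an autoencoder learns a compact
# representation (the "code") of its input by reconstructing it.
# Layer sizes and the random data are illustrative assumptions.
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(64,))
code = keras.layers.Dense(8, activation="relu")(inputs)       # learned representation
outputs = keras.layers.Dense(64, activation="sigmoid")(code)  # reconstruction

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train to reproduce the input; no hand-engineered features needed.
X = np.random.rand(256, 64).astype("float32")  # placeholder data
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
```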

But every representation is shaped by various factors of variation, and for the system to learn the representation, we need to identify which factors to discard and which to keep. That is, of course, a very difficult task for complex representations.

So the solution, which has gained immense popularity since late 2006 and is largely responsible for the revival of interest in artificial intelligence, is to express complex representations in terms of simpler representations. This is deep learning. It works through multiple layers of increasingly abstract factors, with each layer building on the previous one to provide a new representation.
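In code, "simpler representations feeding more complex ones" is literally a stack of layers. Here is a minimal Keras sketch of such a deep network; the sizes, and the mapping of layers to "edges" and "shapes," are illustrative assumptions rather than anything prescribed:

```python
# A deep-learning sketch: each layer turns the previous layer's output
# into a new, more abstract representation. Sizes are illustrative.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # e.g. a flattened 28x28 image
    keras.layers.Dense(256, activation="relu"),    # simple features (think: edges)
    keras.layers.Dense(64, activation="relu"),     # combinations (think: eyes, whiskers)
    keras.layers.Dense(10, activation="softmax"),  # final, most abstract decision
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # shows one representation stacked on another
```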

Google Brain learning to recognize cats from YouTube videos in 2012 is probably one of the earliest and most talked-about breakthroughs in deep learning. It illustrates the way deep learning works: given a picture of a cat, the system starts by identifying the edges of the face, then identifies features like the eyes, nose, and whiskers, and thus arrives at the conclusion that it must be some sort of animal.

Hopefully, this post gives a little more insight into these terms. Do leave your comments and thoughts, including any better ways to explain them; I will be happy to incorporate them.
