
So You Think You Can Polymorph?

In the true spirit of this blog I am going to take the complex idea of polymorphism and make it as simple as possible.


Now you may already think you understand polymorphism—and perhaps you do—but I’ve found that most software developers don’t actually understand exactly what polymorphism is.

What is polymorphism?

How many times have you been asked this question during a job interview?

Do you confidently know what the right answer is?

Don’t worry; if you are like most developers out there, you probably have the feeling that you know what polymorphism is but are unable to give a clear and concise definition of it.

Most developers understand examples of polymorphism or one particular type of polymorphism, but don’t understand the concept itself.

Allow me to clarify a bit.

What I mean by this is that many times when I ask about polymorphism in an interview, I get a response in the form of an example:

Most commonly a developer will describe how a shape base class can have a circle derived class and a square derived class, and how, when you call the draw method on a reference to the shape base class, the correct derived class implementation of draw is called without you having to know the specific type.
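
In code, that classic example looks roughly like this (a minimal C# sketch; the class and method names are just the conventional ones, not taken from any particular codebase):

using System;

abstract class Shape
{
    // One common interface: every shape can be drawn.
    public abstract void Draw();
}

class Circle : Shape
{
    public override void Draw() { Console.WriteLine("Drawing a circle"); }
}

class Square : Shape
{
    public override void Draw() { Console.WriteLine("Drawing a square"); }
}

class Program
{
    static void Main()
    {
        // We only hold Shape references; the correct Draw runs
        // without us knowing the concrete type of each element.
        Shape[] shapes = { new Circle(), new Square() };
        foreach (Shape shape in shapes)
        {
            shape.Draw();
        }
    }
}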

While this is technically a correct example of runtime polymorphism, it is not in any way concise, nor is it a definition of the actual term.

I myself have described polymorphism in a similar fashion in plenty of job interviews.

True understanding

The problem with giving only that example as an explanation is that it doesn’t demonstrate true understanding of the concept.

It is like being able to read by memorizing whole words while not understanding the phonetics that underlie reading itself.

A good test for understanding a concept is the ability to create a good analogy for that concept.

Oftentimes if a person cannot come up with an analogy to describe a concept, it is because they lack the true understanding of what the concept is.

Analogies are also an excellent way to teach concepts by relating things to another thing that is already understood.

If right now you can’t come up with a real-world analogy for polymorphism, don’t worry; you are not alone.

A basic definition

Now that we understand why most of us don’t truly understand polymorphism, let’s start with a very basic concise definition.

Polymorphism is sharing a common interface for multiple types, but having different implementations for different types.

This basically means that in any situation where you have the same interface for something but can have different behavior based on the type, you have polymorphism.

Think about a Blu-ray player.

When you put a regular DVD in the player what happens?

How about when you put a Blu-ray disc in the player?

The interface of the player is the same for both types of media, but the behavior is different.  Internally, there is a different implementation of the action of playing a disc depending on what the type is.

How about a vending machine?

Have you ever put change into a vending machine?

You probably put coins of various denominations or types in the same slot in the machine, but the behavior of the machine was different depending on the type.

If you put a quarter in the machine it registers 25 cents.  If you put in a dime it registers 10 cents.
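
If you wanted to map that vending machine back to code, it might look something like this hypothetical C# sketch (Coin, Quarter, Dime, and VendingMachine are my own illustrative names, not something from the article):

using System;

// One common interface (Coin), different behavior per type.
abstract class Coin
{
    public abstract int ValueInCents { get; }
}

class Quarter : Coin
{
    public override int ValueInCents { get { return 25; } }
}

class Dime : Coin
{
    public override int ValueInCents { get { return 10; } }
}

class VendingMachine
{
    private int _totalCents;

    // The slot accepts any Coin; what gets registered depends on the coin's type.
    public void Insert(Coin coin)
    {
        _totalCents += coin.ValueInCents;
        Console.WriteLine("Registered " + coin.ValueInCents + " cents, total: " + _totalCents);
    }
}

class Program
{
    static void Main()
    {
        var machine = new VendingMachine();
        machine.Insert(new Quarter()); // registers 25 cents
        machine.Insert(new Dime());    // registers 10 cents
    }
}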

And that is it, you now understand the actual concept of polymorphism.

Want to make sure you don’t forget it?  Try coming up with a few of your own real world analogies or examples of polymorphism.

Bringing it back to code

In code polymorphism can be exhibited in many different ways.

Most developers are familiar with runtime polymorphism that is common in many OO languages like C#, Java and C++, but many other kinds of polymorphism exist.

Consider method overloading.

If I create two methods with the same name that differ only in the types of their parameters, I have polymorphic behavior.

The interface for calling the method will be the same, but the type will determine which method actually gets called.

int Add(int a, int b) { return a + b; }
decimal Add(decimal a, decimal b) { return a + b; }

You might be shaking your head “no” thinking that this is not polymorphism, but give me the benefit of the doubt for a moment.

The most common argument against this example as polymorphism is that when you write this code the method that is going to be called is known at compile time.

While this is indeed true for statically typed and compiled languages, it is not true for all languages.

Consider Add being a message instead of a method.

What I mean by this is that if the actual determination of which method gets called in this situation could be deferred until runtime (late binding), we would have a situation very similar to the common shape example.

In many languages this is what happens.  In Objective-C or Smalltalk for example, messages are actually passed between objects and the receiver of the message determines what to do at runtime.
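
If you would like to see that deferral in a language you may already be using, C#’s dynamic keyword pushes the same overload resolution to runtime. This sketch is my own illustration, not something from the original article:

using System;

class Calculator
{
    // Two overloads sharing one name; which one runs depends on the argument types.
    public int Add(int a, int b) { return a + b; }
    public decimal Add(decimal a, decimal b) { return a + b; }
}

class Program
{
    static void Main()
    {
        var calc = new Calculator();

        // With 'dynamic', the overload is not chosen until runtime,
        // based on the actual runtime types of the arguments.
        dynamic x = 2;
        dynamic y = 3;
        Console.WriteLine(calc.Add(x, y));   // binds to Add(int, int) at runtime -> 5

        dynamic p = 2.5m;
        dynamic q = 0.5m;
        Console.WriteLine(calc.Add(p, q));   // binds to Add(decimal, decimal) at runtime -> 3.0
    }
}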

The point here is that polymorphism can happen at compile time or during execution; it doesn’t really matter which.

Other polymorphic examples in code

Since the intent of this post is not to classify and explain each type of polymorphism that exists in code, but rather to provide a simplified understanding of the general concept, I won’t go into a detailed explanation of all the kinds of polymorphism we see in code today.  Instead I’ll give you a list of some common examples that you may not have realized were actually polymorphic.

  • Operator overloading (similar to method overloading)
  • Generics and template programming (here you are reusing source code, but the actual machine code executed by the computer is different for different types; see the sketch after this list)
  • Preprocessing (macros in C and C++)
  • Type conversions
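
To make the generics item a little more concrete, here is a minimal C# sketch (the Max helper is my own illustration, not something from the article) of parametric polymorphism: one definition, different behavior depending on the type it is used with.

using System;

class MathUtil
{
    // One generic definition, many concrete uses:
    // Max<int>, Max<decimal>, Max<string>, ... each behaves according to its type's CompareTo.
    public static T Max<T>(T a, T b) where T : IComparable<T>
    {
        return a.CompareTo(b) >= 0 ? a : b;
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(MathUtil.Max(3, 7));            // int comparison -> 7
        Console.WriteLine(MathUtil.Max(2.5m, 1.5m));      // decimal comparison -> 2.5
        Console.WriteLine(MathUtil.Max("apple", "pear")); // string comparison -> "pear"
    }
}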

Why understanding polymorphism is important

I may be wrong, but I predict that more and more development will move away from traditional OO as we find other ways of modularizing code that are not so rooted in the concept of class hierarchies.

Part of making that transition is understanding polymorphism as a general-purpose, useful computer science concept rather than a very situational OO technique.

Regardless, I think you’ll agree that it is nice to be able to describe polymorphism itself rather than having to cite the commonly overused example of shapes.
