# Algorithm of the Week: Shortest Path with Dijkstra


Tonight I decided to spend my study time sitting down and implementing Dijkstra’s algorithm in Python to help me understand it. When coding up a solution to a problem like this, I try not to look at other solutions so I won’t be tempted to cheat and skip steps in the learning process. Hopefully that means a more thorough explanation, as well as code that makes more sense to someone just learning.

So first, what is a “shortest path” algorithm, and why Dijkstra’s algorithm specifically?

Shortest path algorithms can be most easily related to a class of problems every online driving-direction service (like MapQuest or Google Maps) has to solve. These problems work with a concept known as a directed weighted graph: a collection of points with connections to other points, as well as a cost for each connection. Here, it might help to visualize it:

Imagine that’s a simplified map of a city. Each of the labeled circles is a street intersection, the lines connecting them are streets, and the numbers are how long it takes to get to each intersection from the previous one.

Now, as quickly as you can, tell me what route through the city will take the least amount of time?

Let’s use that as the problem we want to solve.

## First steps

I like to work backwards and start with what do I expect to do to invoke the algorithm. I’ll need something like this:

```python
solution = find_shortest_path(graph, "A", "H")
```

Also we’ll need to describe our graph in the form of a data structure. I love JSON so let’s use the next closest thing in Python: Dictionaries. This is the graph above translated:

```python
graph = {
    "A": {"B": 1, "C": 2},
    "B": {"F": 6, "C": 1, "E": 3},
    "C": {"E": 4, "D": 2},
    "D": {"E": 1, "G": 4},
    "E": {"F": 2, "G": 2},
    "F": {"G": 1, "H": 11},
    "G": {"H": 14},
}
```

This can be read as: “Traveling from intersection A to intersection B takes 1 minute.” “Traveling from intersection A to intersection C takes 2 minutes.” “Traveling from intersection B to intersection F takes 6 minutes.” etc.
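Since the graph is just nested dictionaries, reading a cost back out takes nothing more than two plain lookups. A quick sketch, using an abbreviated copy of the graph above:

```python
# Abbreviated copy of the graph above, just to show the lookups
# (same nested-dictionary layout, fewer intersections):
graph = {
    "A": {"B": 1, "C": 2},
    "B": {"F": 6, "C": 1, "E": 3},
}

print(graph["A"]["B"])   # travel time of the street from A to B -> 1
print(list(graph["B"]))  # intersections reachable from B -> ['F', 'C', 'E']
```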

If you want to skip directly to the main event, my finished code is here: https://gist.github.com/jcbozonier/5424673

The main portion of the code that also (conveniently) explains the high level process is this:

```python
def find_shortest_path(graph, start_node, goal_node):
    paths = [{'path': [start_node], 'cost': 0}]
    while not something_reaches_goal(paths, goal_node):
        path = get_lowest_cost_path(paths)
        new_paths = get_nodes_and_costs(graph, path)
        paths.extend(new_paths)
        paths.remove(path)
    return get_path_that_reaches_goal(paths, goal_node)
```

It starts by creating a default path on the starting node.

The next step is to check whether we have a path that connects to the goal node and is also our cheapest option. That’s only possible at this point if your start and goal nodes are the same. Next we take the lowest-cost path from our list of paths and expand its last node, getting back every path it extends into along with their total costs. We add those paths to our pre-existing list of paths and then remove the path we just explored.

Repeat that paragraph over and over until you’ve found your solution.
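To make one pass through that loop concrete, here’s the first iteration worked through in plain Python. The expansion code is my own sketch of what the helper methods do, not the gist’s exact implementation:

```python
graph = {
    "A": {"B": 1, "C": 2},
    "B": {"F": 6, "C": 1, "E": 3},
    "C": {"E": 4, "D": 2},
    "D": {"E": 1, "G": 4},
    "E": {"F": 2, "G": 2},
    "F": {"G": 1, "H": 11},
    "G": {"H": 14},
}

# We start with a single zero-cost path sitting on the start node.
paths = [{'path': ['A'], 'cost': 0}]

# One iteration: take the cheapest path, extend it down every street
# leaving its last intersection, then drop the path we just explored.
cheapest = min(paths, key=lambda p: p['cost'])
for node, cost in graph[cheapest['path'][-1]].items():
    paths.append({'path': cheapest['path'] + [node],
                  'cost': cheapest['cost'] + cost})
paths.remove(cheapest)

print(paths)
# [{'path': ['A', 'B'], 'cost': 1}, {'path': ['A', 'C'], 'cost': 2}]
```

After one iteration the path on A is gone, replaced by the two one-street paths it expands into.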

## Digging in a bit deeper

What I had to realize is that the core of this program is this big list of potential paths to look at. The more “official” term is “edges,” but let’s not get hung up on that. So yeah, we have this list of potential paths with their total costs, and we’re basically just saying, “Hey! This path takes less time than the others. I wonder if it’ll still be that way when we grab the next intersections it’s connected to.”

So we’re exploring the different intersections, but at each step only the one that can still give us the lowest *total* time possible.

Also notice that once we’ve explored a path and discovered all of its connected paths, we remove it from our list. We already know it isn’t the optimal path, because we check for that before starting each iteration of our loop. If it were the best path, we would’ve stopped already.

There are more details in the specific methods that make up this overall view of the algorithm but they’re more implementation concerns to me. Once you understand what is really making the algorithm tick you can code those whatever way you like.
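For the curious, here’s one way those methods could be filled in: a minimal sketch assuming the path dictionaries shown earlier, not necessarily what my gist does.

```python
def get_lowest_cost_path(paths):
    # The candidate path with the smallest total travel time.
    return min(paths, key=lambda p: p['cost'])

def something_reaches_goal(paths, goal_node):
    # We're done when the cheapest candidate already ends at the goal.
    return get_lowest_cost_path(paths)['path'][-1] == goal_node

def get_nodes_and_costs(graph, path):
    # Extend the path one street in every direction from its last node.
    last_node = path['path'][-1]
    return [{'path': path['path'] + [node], 'cost': path['cost'] + cost}
            for node, cost in graph.get(last_node, {}).items()]

def get_path_that_reaches_goal(paths, goal_node):
    # Of the candidates ending at the goal, return the cheapest.
    return get_lowest_cost_path(
        [p for p in paths if p['path'][-1] == goal_node])

def find_shortest_path(graph, start_node, goal_node):
    paths = [{'path': [start_node], 'cost': 0}]
    while not something_reaches_goal(paths, goal_node):
        path = get_lowest_cost_path(paths)
        paths.extend(get_nodes_and_costs(graph, path))
        paths.remove(path)
    return get_path_that_reaches_goal(paths, goal_node)

graph = {
    "A": {"B": 1, "C": 2},
    "B": {"F": 6, "C": 1, "E": 3},
    "C": {"E": 4, "D": 2},
    "D": {"E": 1, "G": 4},
    "E": {"F": 2, "G": 2},
    "F": {"G": 1, "H": 11},
    "G": {"H": 14},
}
print(find_shortest_path(graph, "A", "H"))
# {'path': ['A', 'B', 'E', 'F', 'H'], 'cost': 17}
```

Note that `graph.get(last_node, {})` quietly handles dead-end intersections like H, which have no outgoing streets.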

I expect there are bugs in it as I haven’t tested it with cyclical graphs yet. Either way I learned a bunch!

## Our final answer

Oh yeah! So what’s the solution to our problem? This is the output of the function:

```python
{'path': ['A', 'B', 'E', 'F', 'H'], 'cost': 17}
```

So A -> B -> E -> F -> H is the quickest route with a time of 17 minutes.
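You can sanity-check that total by walking the route through the graph dictionary and adding up the street times:

```python
graph = {
    "A": {"B": 1, "C": 2},
    "B": {"F": 6, "C": 1, "E": 3},
    "C": {"E": 4, "D": 2},
    "D": {"E": 1, "G": 4},
    "E": {"F": 2, "G": 2},
    "F": {"G": 1, "H": 11},
    "G": {"H": 14},
}

route = ['A', 'B', 'E', 'F', 'H']
# Pair each intersection with the next one and sum the street times.
total = sum(graph[a][b] for a, b in zip(route, route[1:]))
print(total)  # 1 + 3 + 2 + 11 = 17
```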

Again, the finished code is here: https://gist.github.com/jcbozonier/5424673

I also instrumented my code to explain each step of the algorithm here (settle in, it’s gonna be a long one): https://gist.github.com/jcbozonier/5424843

Note: All views expressed here are my own and not those of my employer.


Published at DZone with permission of Justin Bozonier, DZone MVB. See the original article here.

