Graph Compute With Neo4j: Built-in Algorithms, Spark, & Extensions
Last October at GraphConnect San Francisco, Ryan Boyd – Developer Relations at Neo Technology – delivered this presentation on how to perform various graph compute functions within the Neo4j ecosystem.
editor's note: last october at graphconnect san francisco , ryan boyd – developer relations at neo technology – delivered this presentation on how to perform various graph compute functions within the neo4j ecosystem.
For more videos from GraphConnect SF and to register for GraphConnect Europe, check out graphconnect.com.
We're going to go over graph compute and the various ways you can use it with Neo4j. The following is mostly a survey of the different available technologies, along with some demos of how those technologies work.

Neo4j is optimized for online transaction processing (OLTP) and is intended to be used as your primary database. While it wasn't built specifically with the intention of being used for graph compute or analytics, a lot of customers and open source users are using Neo4j for those purposes.
Two Types of Graph Compute
There are a number of different ways to do graph compute, the first of which is through subgraphs: you traverse the graph from an anchor node and touch only a certain subset of the graph. Neo4j is optimized for subgraph traversals and includes algorithms to perform that function.

You can also perform a global traversal using algorithms such as PageRank, which touches every node in the database. Neo4j isn't optimized for this type of graph compute, but there are some techniques that allow you to perform this type of computation.
Subgraph Queries With Cypher

Mapping Shortest Paths in Twitter
In this first demo, I'm going to map my Twitter network; you can map your own network through the same tool here. I plug my dataset into the graph network application by logging in with my Twitter handle. It then uses OAuth to load your tweets, other users' tweets, etc. into a Neo4j Docker instance for you to run.
The first queries we're going to run are shortest-path queries. I have a number of Neo4j servers running in the background, and to make the queries a bit more interesting, I've uploaded a bunch of other users in addition to my own tweets and those of neighboring users.

With the below query, I'm requesting that Neo4j return allShortestPaths. If you only do shortestPath, it will return only one of the shortest paths rather than all of them:
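The query itself isn't reproduced in this write-up, so here is a minimal sketch of what an allShortestPaths query of this shape might look like. The User and Tweet labels, the screen_name and id properties, and the length cap are assumptions about the demo's data model, not taken from the talk:

```cypher
// Hypothetical labels and properties for the Twitter dataset
MATCH (me:User {screen_name: 'ryguyrg'}), (t:Tweet {id: '123456789'})
MATCH p = allShortestPaths((me)-[*..6]-(t))
RETURN p
```

Swapping allShortestPaths for shortestPath in the second MATCH would return just one of the tied paths.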
Below is the shortest path between me and a particular tweet. I'm represented by the node at the top right (@ryguyrg), while the blue node at the bottom left is the tweet. This tweet was actually a retweet by @bestnosql (bottom right), which mentioned both @maxdemarzi (top left) and @bestnosql.
The shortest paths through which I can get to the tweet run either through my relationship with @bestnosql or through my relationship with @maxdemarzi.
Finding the relationship between two users is a bit more interesting. For the next example, I'm going to try to find the shortest path between a randomly picked Twitter user, Kevin, and me.
Those are the two different paths through which I can get to @kevin. If I had chosen to do shortestPath instead, it would have shown only one of those relationships.
These both represent basic subgraph queries, because we started at one node and traversed a certain segment of the graph to arrive at another node or set of nodes.
Mapping Shortest Paths on Legis-Graph
I loaded the data into my Neo4j instance using a tool called Load Cypher Script, also developed by Will, which allows you to load multi-line Cypher scripts into any Neo4j instance. Using this new dataset, we'll go through another shortest-path query between legislators.
The dataset includes information about which legislators sponsored or co-sponsored certain bills. We are going to run a query to find all shortest paths between Congresswoman Carolyn Maloney and another legislator with the last name Cochran, up to three degrees of separation:
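The query isn't shown in this transcript, so here is a hedged sketch of what it might look like. The Legislator label and lastName property are assumptions drawn from the legis-graph data model:

```cypher
// Assumed label and property names from the legis-graph data model
MATCH (maloney:Legislator {lastName: 'Maloney'}),
      (cochran:Legislator {lastName: 'Cochran'})
MATCH p = allShortestPaths((maloney)-[*..3]-(cochran))
RETURN p
```

The `*..3` bounds the traversal at three relationship hops, matching the three degrees of separation mentioned above.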
The results show us that Maloney voted on two bills that were sponsored by Cochran:
Subgraph Queries With the Dijkstra Algorithm

Shortest Paths Between Airports
Another example comes from a database that contains all the airports and the distances between them. To find the shortest path between the San Francisco and Missoula airports, I wrote the following query:
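The original query was lost from this transcript; a plausible reconstruction looks like the following. The Airport label, code property, and FLIES_TO relationship type are assumptions about the dataset:

```cypher
// Assumed Airport label, code property, and FLIES_TO relationship type
MATCH (sfo:Airport {code: 'SFO'}), (mso:Airport {code: 'MSO'})
MATCH p = shortestPath((sfo)-[:FLIES_TO*]-(mso))
RETURN p
```

Note that shortestPath here minimizes the number of hops, not the total distance, which is exactly the limitation discussed next.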
You can't fly directly from San Francisco to Missoula, so you have to connect through one other airport. But finding the shortest path doesn't give you any indication of how much time the trip will take or how many miles it is. To find both of those answers, you need to use the Dijkstra algorithm.
In the below example, we have two airports connected by a relationship that carries a distance property, which we will use as the cost when finding the cheapest path. The Dijkstra algorithm is not currently built into Cypher, but you can call it directly through a REST API.
Below is the REST API call. The number 197909 towards the end of the URL is the node identifier, and we are trying to find the shortest path between this node and another node, which will be specified shortly. The REST API call also contains our content type, the username and password, and the machine:
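The call itself didn't survive in this transcript. Against the legacy Neo4j REST API's path endpoint, it would look roughly like this; the host, credentials, and dijkstra.json filename are assumptions:

```shell
# Hypothetical host and credentials; 197909 is the start node from the article
curl -X POST \
     -H "Content-Type: application/json" \
     -u neo4j:password \
     -d @dijkstra.json \
     http://localhost:7474/db/data/node/197909/path
```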
Below are the contents of dijkstra.json. I specify the node that I'm going to, the cost property as distance, the relationship type as travels, and an outbound traversal:
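The file contents were lost from this transcript; a reconstruction based on the shape of the legacy REST path endpoint's request body might look like the following. The target node ID is a placeholder:

```json
{
  "to": "http://localhost:7474/db/data/node/198123",
  "algorithm": "dijkstra",
  "cost_property": "distance",
  "relationships": {
    "type": "TRAVELS",
    "direction": "out"
  }
}
```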
The REST API result, which can be parsed, looks like the following:

The first thing the results show are the ID numbers of all the different relationships and nodes. This is much easier to work with in the Java library than when you're trying to bounce back and forth to the browser.
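As a sketch of that parsing step, here is how you might pull the node IDs and total weight out of a response shaped like the legacy REST path API's output. The response body below is illustrative, not the talk's actual result:

```python
import json

# Hypothetical response shaped like the legacy Neo4j REST path API:
# a total weight plus node/relationship URLs whose trailing segment is the ID.
response = json.loads("""
{
  "weight": 551.0,
  "start": "http://localhost:7474/db/data/node/197909",
  "end": "http://localhost:7474/db/data/node/198123",
  "nodes": [
    "http://localhost:7474/db/data/node/197909",
    "http://localhost:7474/db/data/node/197950",
    "http://localhost:7474/db/data/node/198123"
  ],
  "relationships": [
    "http://localhost:7474/db/data/relationship/320001",
    "http://localhost:7474/db/data/relationship/320002"
  ]
}
""")

def trailing_id(url):
    """Extract the numeric ID from the end of a REST API URL."""
    return int(url.rstrip("/").rsplit("/", 1)[-1])

# Node IDs along the cheapest path, plus the total cost of that path
node_ids = [trailing_id(u) for u in response["nodes"]]
print(node_ids)
print(response["weight"])
```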
If I run the query in the Neo4j browser, it looks like the following:
That returns the following graph, which shows that the shortest path between the San Francisco and Missoula airports is through RNO, or Reno, Nevada.

I also hand-parsed the earlier results from the Dijkstra algorithm, which tell us that flying through Boise is the shortest trip:
You can also perform global algorithms in Neo4j by using extensions such as the graph processing extension, built by Max De Marzi and Michael Hunger, and the GraphAware NodeRank algorithm, which performs PageRank.
Global algorithms can also be performed using Spark integration through Mazerunner, built by Kenny Bastani, which allows you to run both Spark and GraphX jobs on top of your Neo4j data. Mazerunner is implemented in part as an extension and is the point at which all these tools fit together.
There are some concerns about using these extensions to run global algorithms.

The first issue is that most of these are batch jobs, because they are too computationally heavy to deliver the real-time results that you've come to expect from Neo4j. However, we can write the results from these algorithms back to our Neo4j datastore to allow for real-time queries. So you compute your results off of the data, put those results back into Neo4j, and then use the graph to make real-time decisions.

There is also a computational performance challenge here: you don't want to interrupt the real-time processing of your database. There are a couple of different ways to prevent this.
Some people create a node in their cluster specifically for analytics, which they sync with the original node; this allows them to run analytics queries on that node, either through extensions or by handing the work off to something like Mazerunner. You can also use an extension like GraphAware NodeRank, which only runs when you don't have many transactions occurring in your database.
Although there are certain limitations, there is a great benefit to using these integrations. In particular, Spark has a great community of people building different algorithms with GraphX on top of graphs that you can use.

Let's go through a quick tour of Mazerunner, which as I mentioned was developed by Kenny Bastani but was officially turned over to Neo4j to manage moving forward. It's an open application on GitHub, and we are always looking for feedback.
Global Algorithms With Mazerunner
Mazerunner is based on Spark, which is all about in-memory, large-scale data processing, and uses GraphX as the API for graph processing. Below is its general architecture:
To run this, you perform a query in Neo4j and call the Spark extension, which takes data out of Neo4j through Mazerunner and copies it into HDFS. Spark then reads that data out of HDFS, does in-memory processing, and writes the results back to HDFS, where they are picked up by Neo4j through the extension. It's essentially a combination of a few different messaging queues.
We are going to use what I call the vanilla installation of Mazerunner using Docker Compose, which combines a few different Docker images: an HDFS node, the Mazerunner image, and a Neo4j database:
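The compose file itself isn't reproduced here; a sketch based on the Mazerunner project's published docker-compose setup follows. The image names, tags, and volume path are assumptions and may have changed since the talk:

```yaml
# Approximation of the Mazerunner "vanilla" docker-compose.yml
hdfs:
  image: sequenceiq/hadoop-docker:2.4.1
  command: /etc/bootstrap.sh -d -bash
mazerunner:
  image: kbastani/neo4j-graph-analytics
  links:
    - hdfs
graphdb:
  image: kbastani/docker-neo4j
  ports:
    - "7474:7474"
  volumes:
    - /opt/data:/opt/data
  links:
    - mazerunner
    - hdfs
```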
You can also see the ports that the Neo4j database is mapped to. The volumes entry points to the actual volume where the Neo4j data is stored, which isn't in the container because the container doesn't persist. Then we specify links from Neo4j to Mazerunner and HDFS, which allow network communication to occur between the containers.
Next, I type in docker-compose up, which starts each of the containers (HDFS, Mazerunner, and Neo4j) and shows us the combined console output. At the end, it starts up Neo4j.
In Neo4j, we start on the same port and IP and enter our query. We want to find the legislators that have a strong PageRank, match those who are in the Senate, and then return their names:
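The query isn't shown in this transcript. A hedged reconstruction follows, assuming Mazerunner writes a pagerank property back to each node and that the legis-graph model distinguishes senators with a type property; both names are assumptions:

```cypher
// Assumed property names; Mazerunner writes PageRank scores back to nodes
MATCH (l:Legislator)
WHERE exists(l.pagerank) AND l.type = 'senator'
RETURN l.firstName, l.lastName, l.pagerank
ORDER BY l.pagerank DESC
```

Before the PageRank job runs, no node has the pagerank property, so this returns nothing, which matches what happens next.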
The query won't return any rows right now, because we don't currently have any PageRanks.
Next, I go to Mazerunner and issue an HTTP request from within the browser, which kicks off the PageRank algorithm over the relationship that represents sponsor and co-sponsor connections:
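The exact URL was lost from this transcript. Mazerunner's HTTP endpoint follows the pattern /service/mazerunner/analysis/{algorithm}/{relationship_type}, so the call was presumably something like the following; the relationship type name is an assumption about the legis-graph model:

```shell
# Hypothetical relationship type for the sponsor/co-sponsor edges
curl http://localhost:7474/service/mazerunner/analysis/pagerank/SPONSORED_BY
```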
The call was made, and you can see at the top that it exported records 20,000, 40,000, and 60,000 out of Neo4j:
You can also see, next to Mazerunner Export Status, that 100 percent of the requested data has been exported. Next, Mazerunner and Spark output their logs, which include a lot of log data.
It's important to note that PageRank isn't computationally heavy. For those algorithms that are computationally heavy, it's better to use a tool like Spark, because you can distribute the processing across many different nodes, combine the results in a distributed file system such as HDFS, and then load the data back into Neo4j. But it's much faster to do algorithms like PageRank directly in Neo4j.

Below we can see that our job was completed, and that the data was written back to HDFS and loaded back into Neo4j.
If we run the MATCH statement again, all the senators with PageRanks are returned, ordered from highest PageRank to lowest:
We can see that Orrin Hatch is the senator with the highest PageRank.

If we remove the Senate restriction and return the results for all legislators, you can see that the PageRanks for members of the House of Representatives are much higher:

Now we're going to shut down Spark, which is really easy to do:
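The command itself isn't shown; with the Docker Compose setup used above, it would presumably have been something like:

```shell
# Stops the hdfs, mazerunner, and neo4j containers started earlier
docker-compose stop
```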
Global Algorithms With Neo4j Graph Analysis

I'm then going to start up the instance with the Neo4j graph analysis extension:
This extension has the same mechanism for calling the URL as Mazerunner, but the processing runs directly inside Neo4j. In this case, the results come up much, much faster than they did with Mazerunner, which points to the fact that sometimes there are advantages to doing computations like this directly in Neo4j rather than through Spark.
However, when I tried to run a betweenness centrality algorithm earlier today on Spark, to which I gave only a single CPU on one machine, it took several hours. So if you have more computationally heavy algorithms, using Spark makes more sense, because you can parallelize the query out to many machines.
You can also run a closeness centrality algorithm, which returns how close a node is to all other nodes in the graph and can be easily performed by these extensions. There are a lot of different algorithms you can use within Mazerunner, and the graph processing extension has a similar set of algorithms. If you use Spark, you get the added resource of the Spark community, but using Neo4j directly may be faster for smaller, less computationally intensive datasets.
Inspired by Ryan's talk? Register for GraphConnect Europe on April 26, 2016 for more industry-leading presentations and workshops on the evolving world of graph database technology.
Published at DZone with permission of Ryan Boyd, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.