The Power of Graphs and How to Use Them
A presentation from GraphConnect San Francisco on the best use cases and the future of graph databases
Editor's note: Last October at GraphConnect San Francisco, Jim Webber – Chief Scientist at Neo Technology – delivered this very sciency closing keynote on the graph database landscape.
For more videos from GraphConnect SF and to register for GraphConnect Europe, check out graphconnect.com.
We're going to talk about where we're going in the graph space. It's going mainstream, and it's becoming a really popular space to be.
This is partly a closing keynote and partly my own catharsis. I can't really afford therapy: you're it.
I've got three things I want to grumble about, to see if we can set a reasonable course through the interesting and emerging graph space.
First, we're going to look at what it means to keep graphs in a single-model, Neo4j-style database versus going the multi-model, polyglot route.
Next, we're going to look at what it means to do online transaction processing versus analytics, and how that plays out in the graph space.
Finally, we're going to talk about native versus non-native storage. There's very, very interesting computer science going on there. It also happens to be the stuff that my team is working on over in the Old World.
Single vs. Multi-Model Databases
Let's start with the single- versus multi-model stuff.
Anyone ever heard of Unix? There's a philosophy underpinning Unix that I think is quite appealing to those of us in engineering and development.
It is: do one thing and do it well.
Unix tools are all optimized to do one thing and do it well, and from that foundation you can build sophisticated workloads by chaining those tools together.
I rather like the idea of doing one thing well.
Neo4j does graphs; that's what we do. We don't try to be all-singing, all-dancing, all things to all people. We do graphs and we do them well. That's what we've strived to do over the years, and it's what we're going to strive to do in the coming years.
But that's not necessarily the only point of view.
You may say that some of the things you have aren't graphy. Maybe you think you need other data kits to deal with those. Maybe you want a mixture of technologies, or a single technology that can handle graphy and non-graphy data. Maybe.
Let's see what happens when we play this forward.
Every Data Model Is a Graph Model
In terms of whether to go with single-model or multi-model database technology, I would say this: graphs are such a flexible model – one that can tolerate, or indeed embrace, many, many different domains and data – that they're probably all you need.
You've got a document problem? Sure, but documents are just trees, and trees are just simple graphs.
You've got shopping lists, or maybe a shopping basket that you want to turn into retail trends. Lists are graphs.
Train lines are graphs. Your lovely road network with this bridge over here is part of a large graph.
Spreadsheets are graphs. Now that is really, really weird, because spreadsheets are the archetypal square. It's a grid, not a graph, Webber.
Except, no: even spreadsheets are graphs. Of course, we have relationships between neighboring cells in a spreadsheet, but where spreadsheets get very graphy is when you start using functions to refer to other cells in that spreadsheet.
In fact, Felienne Hermans, a professor from the Netherlands, will actually take your spreadsheet, decompose it into a Neo4j graph and tell you where all the bugs are. Thus, even boring, square spreadsheets are graphs.
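To make that concrete, here's a minimal Python sketch – with invented cell names and formulas – of how a spreadsheet's formula references form a directed graph of dependencies:

```python
import re

# A hypothetical four-cell spreadsheet: cell name -> formula or literal.
cells = {
    "A1": "10",
    "A2": "20",
    "B1": "=A1+A2",   # B1 reads A1 and A2
    "C1": "=B1*2",    # C1 reads B1
}

CELL_REF = re.compile(r"[A-Z]+[0-9]+")

def dependency_graph(cells):
    """Map each cell to the set of cells its formula refers to."""
    return {
        name: set(CELL_REF.findall(formula)) if formula.startswith("=") else set()
        for name, formula in cells.items()
    }

graph = dependency_graph(cells)
# C1 -> B1 -> {A1, A2}: the "square" spreadsheet is really a little DAG.
```

Once the references are edges, spotting suspicious structure – cycles, dead cells, formulas with huge fan-in – is just graph analysis.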
It is my thesis that practically anything can be modeled conveniently as a graph – and queried efficiently in Neo4j.
Graphs Aren't Multi-Model – They're Super-Model
I've seen a lot of vendor hype going on out there about multi-model databases.
I've been thinking, though: given that the most common data structures I work with are effectively one variant of graph or another, why would I get excited about tools that claim to do multi-model? Graphs already do multi-model.
Even things you might traditionally associate with more specialized databases – things like range queries and so on – don't require a multi-model approach. With Neo4j 2.3 you can do range queries, but that's not the be-all and end-all of it, because these range queries tend to be jumping-off points for even more sophisticated graph analysis.
The graph, it turns out, is the supermodel.
Anything you need to do in a document, anything you need to do in a list or a tree, anything you need to do in a spreadsheet – you can already do in a graph.
So why would you bother going through the hassle of artificially separating out these domains and fighting some kind of multi-model problem when actually – in the real world – once you strip out all of the vendor bullshit, most of the data you're working with is conveniently a graph?
Why would you bother going to all of that trouble when it's already performant and lovely for you?
End of rant one.
Online and Analytics
There are typically two kinds of workloads that we throw at our data processing infrastructure:
- Things that we want to do rapidly in response to some user activity, such as in between a web request and a web response.
- Things where we want to gain insight into a larger set of data, perhaps something that we're prepared to run at many seconds, minutes, hours or days of latency.
In the past, when you did this kind of stuff, you usually ended up building a cube per query.
So when it comes to analysis, what you end up doing in the "ye olde world" of grandad's technology is some crafty ETL jobs. You build a cube per query, you try to predict upfront which dimensions you're going to need to roll into your cube to aggregate, and it's all a little bit of guesswork and pseudoscience. Eventually, you've got a big, expensive warehouse that you're terribly nervous of.
So what's the correct response to seeing one of those OLAP cubes? Why? What do you actually do with those things?
We think about OLAP as cubes because that's the way the technology has driven us. We think about OLAP and analytics as being intermingled, but they're not.
The goal of analytics is to gain insight – not to build cubes. The goal is to gain insight into data so that we can do something actionable with it, whether we're gaining insight reactively or looking at data predictively, as in predictive analytics.
Why Graphs Are Perfect for Analytics
These analytic workloads are very straightforward in a graph.
All your dimensions already exist in a graph. Graphs are naturally n-dimensional structures.
When you do analytics on Neo4j, your dimensions are given to you by relationship types and labels and so forth, and you don't have some weird-ass cube to deal with.
It turns out graph theory is tremendously good for both reactive and predictive analytics.
Despite Emil having written a really nice book about graph databases, the best book on the planet, bar none, about graphs is Networks, Crowds, and Markets by Easley and Kleinberg. In that book, they describe how to tame graphs to do sophisticated analyses, using the almost 300 years of graph theory we have at our disposal to gain amazing insight.
A Crash Course in Graph Theory & World War I
I'm going to give you a crash course in graph theory.
Graphs love triangles. It's really weird. Graph theoreticians have a really posh name for this: "triadic closure."
Triadic closure is the tendency that, when I've got two sides of a triangle, the graph wants to close the third side. A stable triangle can be three positive sentiments; another way of creating a stable triangle is two negative sentiments and one positive.
Knowing this, you can now start to do some really sophisticated predictive graph analyses, simply by iterating over your graph and closing these triangles.
Don't believe me? Let me give you a historical example where we already know the outcome. This is the great houses of Europe around 1850.
In this graph, the black relationships tell us allies and the red relationships tell us enemies. You can see that Europe was already a fairly fractious place 150-odd years ago.
Now, graph theory doesn't know anything about the great houses of Europe. It doesn't know anything about history. All graph theory knows is that it wants to make stable triangles.
So let's roll this forward and iterate over it. Immediately, we bring Italy in.
It forms a stable triangle with Austria and Germany. See that? They're all friends.
Then what happens over here is that France and Russia dissolve their longstanding aggravation and become allies.
This new relationship between Russia and France is weird. The graph hates this. Anyway, after the Russians and the French gang up, France and the UK form an alliance too.
This is awkward for two reasons.
First, there's a famous historical document that described this new alliance: it's called the Entente Cordiale. My French isn't actually very good, but I do know the translation of Entente Cordiale. It's like, "England, please stop beating us up." That's what it loosely translates as – awkward.
Secondly, and seriously, the graph does not like this unstable triangle between Russia, France and the UK.
The graph wants to eliminate that negative edge between Russia and the UK. So, finally, Russia and the UK become friends. You iterate over the graph, just making these low-energy, stable triadic closures, and you end up here:
This is a low-energy graph. There is nothing else that the graph wants to change; all of these are stable triadic closures.
Not knowing anything about World War I, graph theory just predicted the starting lineup for World War I.
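That iteration is easy to sketch in code. Here's a minimal Python model of structural balance – the country labels and sentiments are illustrative, not the full historical graph. A triangle is stable exactly when the product of its edge signs is positive: three allies, or two enemies sharing a common friend.

```python
from itertools import combinations

# Signed relationships: +1 = allies, -1 = enemies (illustrative labels).
edges = {
    frozenset({"Germany", "Austria"}): +1,
    frozenset({"Germany", "Italy"}):   +1,
    frozenset({"Austria", "Italy"}):   +1,  # three allies: stable
    frozenset({"France", "Russia"}):   +1,
    frozenset({"France", "UK"}):       +1,
    frozenset({"Russia", "UK"}):       -1,  # two friends, one feud: unstable
}

def is_stable(tri):
    """Stable iff the product of the triangle's three signs is positive."""
    signs = [edges[frozenset(pair)] for pair in combinations(tri, 2)]
    return signs[0] * signs[1] * signs[2] > 0

def unstable_triangles():
    """Find every fully-connected triangle that the graph 'wants' to change."""
    nodes = sorted({n for edge in edges for n in edge})
    return [
        tri for tri in combinations(nodes, 3)
        if all(frozenset(pair) in edges for pair in combinations(tri, 2))
        and not is_stable(tri)
    ]

# One unstable triangle remains: the graph "wants" to flip its negative edge,
# i.e. Russia and the UK becoming friends.
```

Iterating this – flipping or adding edges until no unstable triangle remains – is exactly the low-energy settling described above.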
But you can also apply this to your domain.
You can apply this to figuring out what people are going to buy next. You can apply this to insurance underwriting and risk. You can apply this to a whole range of things.
Graph theory isn't perfect, but it's a decent way of having a future view of how your world will be.
The point here is that there's nothing special about this. Once your data is in a graph, this is just more graph analytics. No scary OLAP cube, no ETL magic required.
When you're doing this in graphs, there are no cubes to build. You just use the graph that you already have running on your servers: your graph, and the algorithm that you want to run against it.
The graph that you're using to serve OLTP queries to your web property is the same graph that you're running these deep analytic queries on.
The Proper Role of Offline Analytics
However, at some point you do run up against a limit. When you're doing some really large jobs, you want to be able to bring data out of the database and compute across it, which is great.
What Emil talked about earlier – Cypher running on Spark – means that you've got a really unified API to the whole graph universe. Your compute engine speaks Cypher; your database speaks Cypher.
This enables a kind of virtuous cycle where you project some subgraph out of the database via a query, you crunch it through your processing layer, and then you drop the results back into the graph. It's a continuous cycle of enrichment.
So, in this case, what you've got is something like this: your app can speak Cypher to Spark, or your app can speak Cypher to Neo4j.
You've got this unified view of your underlying graph, be it compute or be it storage and query. But in the future, I think there is an exciting opportunity to go further.
I think there is an opportunity for some Cypher query planner, or some other middleware, to intervene on your behalf and figure out when it should dispatch to the database (like Neo4j) for a database query and when it should dispatch to a compute engine (like Spark) to crunch over those graphs in parallel.
I'm not saying that the Neo4j team will build this tomorrow, but to me, this is an obvious next step.
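The project–crunch–write-back cycle can be sketched in a few lines of plain Python – all names here are invented stand-ins; in practice the projection and write-back would be Cypher queries against Neo4j and the crunching would run on Spark:

```python
# A tiny in-memory "graph": node properties plus an edge list.
graph = {
    "nodes": {"a": {}, "b": {}, "c": {}},
    "edges": [("a", "b"), ("a", "c"), ("b", "c")],
}

def project(graph):
    """Query step: pull a subgraph (here, just the edge list) out of the store."""
    return list(graph["edges"])

def crunch(edges):
    """Compute step: a trivial analytic -- degree per node."""
    degree = {}
    for src, dst in edges:
        degree[src] = degree.get(src, 0) + 1
        degree[dst] = degree.get(dst, 0) + 1
    return degree

def write_back(graph, scores):
    """Enrich step: drop the results back into the graph as node properties."""
    for node, score in scores.items():
        graph["nodes"][node]["degree"] = score

# One turn of the virtuous cycle: subsequent queries can use the new property.
write_back(graph, crunch(project(graph)))
```

Each turn of the cycle leaves the graph richer than before, which is what makes it continuous enrichment rather than a one-off ETL job.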
Native vs. Non-Native Graphs
My final gripe: native versus non-native.
My colleague, Max De Marzi, just wrote a stunningly good blog post about performance, and in particular, he took umbrage with vendor benchmarks. He said it's funny that when a vendor writes a benchmark, their stuff always comes out on top. That's the TL;DR. It's true.
That kind of got me thinking.
We know how fast Neo4j is. Ian Robinson and his team are always beating Neo4j to death and seeing how fast it can go. But fundamentally, what struck me was that Max happened to take umbrage with a database that was not graph-native, but that claimed to be a zillion times faster at everything.
This got me thinking about the differences between native and non-native databases. Even before we get into fancy things like cluster architecture and so on, there are some fundamental differences between a native and a non-native graph database, and it comes down to cost of access.
Native vs. Non-Native Use of Indexes
In a native graph database like Neo4j, most of the operations that you do in order to traverse the graph are cheap: they run in constant time. All Neo4j does is chase pointers around a data structure in order to traverse the graph, and that's super cheap to do.
On my laptop, I can spin up some vendor-bullshit benchmark of my own and show that my laptop will do 14 million traversals a second. It's a nice laptop, but it's just a laptop.
If I were not a native graph database, yet I wanted to fake a graph API to the user, what would I actually do when it comes to fast traversals? Well, typically I'm going to use indexes to fake this. The thing with indexes is, they're not cheap.
Back in 2010, Marko Rodriguez and Peter Neubauer coined the term index-free adjacency.
It means that when you're at a node and you're looking to traverse to a neighboring node, there's no global index lookup. The node itself, because of its connectivity, acts like a mini-index, and because of that, we can do these super cheap traversals.
In Neo4j, we do have some index lookups, but they tend to be few and far between. Index lookups tend to be for finding a starting point in the graph; once you've found a starting point, you execute constant-cost, index-free-adjacency hops until you reach your informational goal.
But if you're faking it, you're doing all of this stuff in indexes.
Indexes aren't bad things, but you have to understand that the algorithmic cost of accessing an index is O(log n), and if you have to do an index lookup for every hop through your graph, that means overall you've got O(n log n) complexity.
That has nothing to do with implementation details, either. It's a choice that you made when you built non-native kit.
You're going to fake the relationships by doing index lookups. And by the way, if you want to traverse both ways, you're going to need two sets of indexes, and that has an impact on your write performance as well.
If you've got an O(n log n) cost for your graph traversals, you will lose out to any native database which can do each hop in one step, period. Computer science wins out.
By computer science, I decree that native technology wins out over non-native.
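Here's a toy Python illustration of that cost difference, assuming a simple path graph: the native store hops via adjacency in O(1) per step, while the faked version pays a binary-search lookup of O(log E) into a global edge index on every single hop.

```python
import bisect

N = 1000  # a path graph: 0 -> 1 -> 2 -> ... -> 999
adjacency = {i: [i + 1] for i in range(N - 1)}          # native: index-free adjacency
edge_index = sorted((i, i + 1) for i in range(N - 1))   # non-native: one global sorted index

def traverse_native(start, hops):
    node, cost = start, 0
    for _ in range(hops):
        node = adjacency[node][0]     # constant-time pointer chase
        cost += 1
    return node, cost

def traverse_indexed(start, hops):
    node, cost = start, 0
    for _ in range(hops):
        i = bisect.bisect_left(edge_index, (node,))   # O(log E) search per hop
        node = edge_index[i][1]
        cost += len(edge_index).bit_length()          # charge ~log2(E) comparisons
    return node, cost

# Same answer either way, but the indexed version pays ~log2(E) per hop:
# 100 hops cost 100 units natively vs. roughly 1,000 via the index.
```

Both traversals land on the same node; only the cost model differs, and that difference compounds with every hop.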
What About Distributed Systems?
But what about when we cluster these things? Because eventually we want to put this stuff into production, and we want to do some clustering.
There's a really interesting thing going on in distributed systems. There is a significant tension between having a system which is available and having a system which is reliable.
Available means that even in the presence of faults, the system will be able to accept reads and writes. It doesn't mean that you'll get the correct answer from the system; it just means that it will be available for reads and writes.
Reliable is the property of a system where, even in the presence of faults, you get the correct answer.
Back in the day, Fischer, Lynch and Paterson wrote their famous paper – it's now known as FLP – which demonstrated that you cannot have both. You can have availability or you can have reliability.
There are interesting design trade-offs when you're clustering. A lot of the other NoSQL databases are definitely towards the availability end of the spectrum.
But the thing is, in the graph world things are a bit more difficult. We need consistency in the graph world. If you're using graphs and you allow for bidirectional traversal, you actually need to have consistency of two.
Consistency of two is hard. It's not just twice as hard as consistency of one; it's like a million times harder. You've got to be able to keep two sets of replicas in sync, and if you don't, bad things happen.
You might say, "There are already databases out there that are doing both availability and reliability, and surely they must work." No, I'm afraid they don't. They will lose your data.
The honest ones actually write on their webpage how they will corrupt your data under normal operations – and I salute them for being honest – but you can't use them safely.
Introducing Neo4j Core-Edge
Which brings us to: what is Neo4j doing about the trade-off between reliability and availability? Fair question, good friends, fair question.
Ian Robinson and I wrote a book with a guy called Savas Parastatidis called REST in Practice. It was a pretty interesting topic to be involved with, and we got to think about why the web scales.
So why does the web scale the way it does? The web scales because it federates load through the network.
It's not about having 10,000 machines serving one million concurrent users, because that's not web scale. That's exactly 100-scale.
So, how do we take our influence from the web's philosophy and direct it towards building better native graph clusters?
What Emil didn't announce earlier is that from Neo4j 3.0 onwards, we have a web-like clustering architecture that the engineers call core-edge, where we have a relatively small number of machines in the core of your network.
These core machines can be geographically distributed or co-located, and they're like the data citadel. They keep it safe. They don't serve stale data – they are reliable.
In fact, these core machines are coordinated by a protocol called Raft.
In that protocol, we've got a model whereby, when you commit a transaction to Neo4j, that transaction is guaranteed safe and replicated. When you've got an ack back, that transaction cannot be destroyed other than by an act of god wiping out your whole data center and all of your backups.
However, to federate that load outwards like the web does, we also have a bunch of machines that we call edge machines, because they live at the edge of the network. These machines are caches, but they're not just dumb, memcache-style caches.
They are fully functioning Neo4j instances, which means you can run graph queries on them, and you can put them locally, or spread them around the world, or do whatever you like.
We've got this notion now where we can federate load out through the network. The keystone of this is Raft. Raft means your data is safe; those core machines are provably safe. The Raft guys have done a brilliant job with that protocol. I love Raft.
When you look at Raft, the protocol itself is humane. That's an important thing. The understandability of Raft means there are very few places for bugs to hide, and bugs in distributed systems are difficult.
Once you commit, your data is safe. If you can't trust a database to keep your data safe, you have no business putting your data in that database.
You don't poke your fingers into unknown holes and then expect not to have them come back bitten by some critter. Databases are the same. You can trust us with your data. Your data is important – your businesses run on it. The very least we can do is not corrupt it and not lose it.
These edge machines, though, seem to be eventually consistent with respect to the core – and didn't I just say that was a bad thing? Well, that's not the case here.
In Neo4j 3.0, we will also give you the ability to read your own writes, which is a really nice way of reasoning about your data. So suddenly you can write things into the core, and you can read those writes from an edge.
The way that we plan to do that is using the Neo4j Bolt protocol, which happens to be a fork of MessagePack that's just twice as fast.
Looking for inspiration from the web, we're going to support something like HTTP's ETags, so that when you do a read, you'll be able to ask for transaction level X or greater. And if that transaction hasn't percolated over to your edge machine yet, you just get paused for a moment until it has.
This gives you the ability to do things like write to the core and then read from an edge, which is nice, because the ability to read your own writes makes reasoning about your use of the database really straightforward.
That means you can write stuff and then you'll see it. I can't tell you how bloody hard that is in distributed systems. Using ETags we can do it on the web, and using the equivalent of ETags we can do it inside the database cluster.
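Here's a minimal sketch of that ETag-style read-your-own-writes idea – all the names are invented for illustration; the real mechanism ships with Neo4j's drivers over Bolt. A write returns a transaction number, and a read at an edge won't answer until the edge has caught up to at least that transaction.

```python
class Core:
    """The reliable write side: every commit gets a monotonically increasing id."""
    def __init__(self):
        self.log = []

    def write(self, value):
        self.log.append(value)
        return len(self.log)   # transaction id, playing the role of an ETag

class Edge:
    """A cache that applies the core's log lazily."""
    def __init__(self, core):
        self.core = core
        self.applied = 0
        self.data = []

    def read(self, at_least_tx=0):
        # "Pause" (here: catch up synchronously) until we've applied the
        # caller's bookmarked transaction, then serve the read.
        while self.applied < at_least_tx:
            self.data.append(self.core.log[self.applied])
            self.applied += 1
        return list(self.data)

core = Core()
edge = Edge(core)
bookmark = core.write("order-placed")      # write to the core, keep the bookmark
visible = edge.read(at_least_tx=bookmark)  # the edge now serves your own write
```

A read without a bookmark may see stale data, which is fine for most traffic; passing the bookmark is what buys you read-your-own-writes.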
But Neo4j can actually go further. You can even choose linearizability in this cluster, which is practically the holy grail for databases. With Neo4j 3.0, you choose your level of consistency when you query the database.
Neo4j 3.0 allows you to read from any machine, including any edge. It also allows you to read your own writes (RYOW) from core or edge. You can even read from the leader.
There's a role in Raft called "leader," which tends to be the most up-to-date instance. Or indeed, you can choose linearizability by querying a quorum of the core machines.
Yes, once you do that, it's relatively expensive, but in those cases where you absolutely, positively need linearizability, you've got it. That's pretty sweet.
And we're going to Raft all the things.
We are not going to give you a choice on the write channel, however.
On the read channel, you get to choose your consistency. Most of the time you'll probably pick conservative consistency levels.
On the write channel, you don't get to pick. We are going to Raft all the things on the write channel, because Raft keeps your data safe.
Once your data is committed around the cluster, it is safe, and in Neo4j that also means durable on disk. So if we suffer partial failures, we just bounce back. You don't get to turn that off. That's our prerogative.
So where are we then, ladies and gentlemen?
Graphs are the supermodel. They already do all of those things you're thinking about doing. If you're thinking about doing documents, it's a graph. If you're thinking about doing trees, it's a graph. If you're thinking about doing hierarchies, it's a graph. Whatever you're thinking about doing, it's a graph.
It's like Admiral Ackbar – it's a graph!
I don't care what Gartner says. Graphs are the big game, and multi-model is a distraction.
If you want to do polyglot persistence, great. There are databases that do one thing and one thing well. Glue them together. It's the 21st century.
Cypher on Spark is also game-changing. You've now got a uniform graph API to the two fundamental pillars of the data ecosystem: to Neo4j, your storage and query graph; and to Spark, your graph compute.
That's going to be such an exciting place to live in the future graph space.
Finally, once you've got a safe Raft, you can start to build amazing, super-powered clusters.
Neo4j itself is going to continue to evolve, and we are going to continue to evolve faster. You'll have to come to GraphConnect Europe, and I'll tell you all about it then.
Opinions expressed by DZone contributors are their own.