Neo4j Considerations in Orchestration Environments
Using Neo4j with increasingly popular container orchestration engines requires a little additional explanation.
As more workloads are being run in Docker containers, using orchestration environments like Kubernetes, Mesos, and Docker Swarm is also gaining in popularity.
Neo4j already provides public Docker images and has published a commercial application to Google’s GKE Marketplace. We also maintain a public Helm chart that customers use to deploy the graph database on Kubernetes.
Databases in Container Orchestrators
Orchestration environments, though, introduce some special architectural challenges for databases, and Neo4j in particular. This article aims to cover some of those and the extra considerations you might want to keep in mind. Most of the examples here will talk about Kubernetes specifically, but architecturally, the considerations are the same whether it’s Kubernetes or any other orchestration manager.
We’ll take a look at the architecture of a Neo4j deployment, both from “inside of Kubernetes” and “outside” perspectives, and talk about how that impacts querying.
Looking at Neo4j Inside of Kubernetes
The diagram below shows the internal layout of a typical Neo4j cluster deployment in Kubernetes. Each pod belongs to one of two StatefulSets, depending on whether it is a Core or Read Replica member. Each pod has a PersistentVolumeClaim, which maps to an underlying disk that stores the data.
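To make the StatefulSet-plus-PersistentVolumeClaim arrangement concrete, here is a minimal sketch of what a Core StatefulSet might look like. All names, the image tag, and the storage size are illustrative assumptions, not taken from the official Neo4j Helm chart.

```yaml
# Illustrative sketch only -- names, image tag, and sizes are assumptions,
# not the official Neo4j Helm chart.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: neo4j-core
spec:
  serviceName: neo4j          # headless service that provides per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: neo4j
      role: core
  template:
    metadata:
      labels:
        app: neo4j
        role: core
    spec:
      containers:
        - name: neo4j
          image: neo4j:3.5-enterprise
          ports:
            - containerPort: 7687   # Bolt
            - containerPort: 7474   # HTTP
          volumeMounts:
            - name: datadir
              mountPath: /data
  volumeClaimTemplates:        # creates one PersistentVolumeClaim per pod
    - metadata:
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The `volumeClaimTemplates` section is the key: each pod gets its own claim, and therefore its own disk, which survives pod restarts.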
Neo4j on Google Kubernetes Engine (GKE) architecture diagram
Looking at Neo4j From Outside of Kubernetes
Within Kubernetes, everything is straightforward: the pods can be connected to directly via internal DNS. The “app2” in the diagram below has no trouble dealing with the cluster, because the auto-assigned internal DNS names resolve and route correctly for it.
App1 talking to Neo4j inside of Kubernetes
The outside app1, though, has a different set of issues. The Kubernetes cluster manager has some public ingress IP addresses, and we need a way to route traffic to the right cluster node; this is where things become a bit tricky.
This section assumes some familiarity with how bolt+routing works in Neo4j. If you need a refresher on how Neo4j query routing works, please have a look at Querying Neo4j Clusters. In that article, we described how the “public advertised address” of a node interacts with the routing table shown in the diagram above.
In Kubernetes setups, there is a specific problem: the internal DNS names in the table above cannot be resolved by app1 outside of the cluster. So even if there is an IP ingress, attempts to use bolt+routing Neo4j drivers will fail: app1 will get a routing table full of un-routable addresses and will fail to connect. There are a few ways to work around this:
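The failure mode can be illustrated with a small conceptual sketch. This is not the real Neo4j driver; the function names and addresses are hypothetical, and it only mimics the logic of a routing driver receiving a routing table of internal-only DNS names.

```python
# Conceptual sketch (NOT the real Neo4j driver): why a bolt+routing client
# fails from outside the cluster when pods advertise internal DNS names.

def fetch_routing_table():
    # What core members might advertise when configured with internal DNS.
    # Names ending in .svc.cluster.local only resolve inside Kubernetes.
    return [
        {"address": "neo4j-core-0.neo4j.default.svc.cluster.local:7687", "role": "LEADER"},
        {"address": "neo4j-core-1.neo4j.default.svc.cluster.local:7687", "role": "FOLLOWER"},
        {"address": "neo4j-core-2.neo4j.default.svc.cluster.local:7687", "role": "FOLLOWER"},
    ]

def routable_from_outside(address):
    # Outside the cluster, internal service DNS cannot be resolved at all.
    host = address.rsplit(":", 1)[0]
    return not host.endswith(".svc.cluster.local")

table = fetch_routing_table()
usable = [entry for entry in table if routable_from_outside(entry["address"])]
print(usable)  # [] -- every advertised address is un-routable, so the driver gives up
```

The list of usable addresses comes back empty, which is exactly the situation app1 finds itself in: it successfully fetched a routing table, but cannot act on any entry in it.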
1. Determine public IP addresses or DNS names that you can use for your pods, and set them in your Docker configuration, so that each pod advertises a proper external address.
2. Set up a NodePort service to route traffic from the outside to a specific pod, and set the advertised address of the pod to be the ingress IP. Make sure to use “port spreading”: if Bolt is usually on 7687, then pod 1 would be on port 7687, pod 2 on port 7688, pod 3 on port 7689, and so on, so that they can all use the same address without conflicting.
3. Set up a service that points only to the leader of the cluster, and then use a plain bolt driver, not a bolt+routing driver.
Keeping in mind that cluster topology can change and leader re-elections can happen, option #3 is usually not a good choice. Options #1 and #2 tend to be superior, but they require extra configuration after you deploy Neo4j. Those configuration steps tend to fall into these categories:
- Register DNS for each node
- Configure Neo4j pods to advertise that DNS
- Configure port spreading ingress services as needed
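As one hedged sketch of option #2, a per-pod NodePort service can target an individual StatefulSet pod via the `statefulset.kubernetes.io/pod-name` label that Kubernetes applies automatically. The service and pod names below are assumptions matching the earlier example, and the port numbers are illustrative; note that NodePorts fall in the 30000–32767 range by default, so the spread ports would be 30687, 30688, and so on rather than 7687–7689.

```yaml
# Illustrative sketch: one NodePort service per core pod, repeated for
# neo4j-core-1 (nodePort 30688), neo4j-core-2 (nodePort 30689), etc.
apiVersion: v1
kind: Service
metadata:
  name: neo4j-core-0-external
spec:
  type: NodePort
  selector:
    # This label is added automatically to StatefulSet pods, which lets
    # the service target exactly one pod.
    statefulset.kubernetes.io/pod-name: neo4j-core-0
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687   # NodePorts must fall in 30000-32767 by default
```

Each pod would then be configured to advertise the matching external address and port (in Neo4j 3.x, via the `dbms.connector.bolt.advertised_address` setting), so that the routing table entries it hands back are actually reachable from outside.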
Now that we’ve covered Neo4j networking and bolt+routing specifically, let’s get into other things that make databases different and what to look out for.
Orchestration Managers and Databases
As orchestration managers were born and grew up, they typically handled workloads composed of many stateless microservices. Developers would create small containers, each holding a single service. Typically that service did not store any data (if it did, it talked to a database deployed outside). Because these services were stateless, they could be deployed in fleets, and killed and restarted at will. Typical deployments might have many replica containers fronted by a load balancer; clients would talk to the load balancer and be forwarded to any of the microservice instances, and it didn’t matter which.
Orchestration manager features play directly to this sweet spot: Kubernetes has the concepts of ReplicaSets (a set of replicas of a given container) and LoadBalancers (allowing access to any replica through a single network ingress).
Databases are Different
Databases (including Neo4j) are different in three key ways here!
They’re Stateful: They must store data; that’s their whole reason for being. So in orchestration managers, we need to think about concepts like disks, persistent volume claims, and so on.
Database pods use Persistent Volume Claims to keep long-term storage of state
Their Pods are Not Interchangeable — They either host different services, or have different capabilities. In the Neo4j architecture, for example, there are leaders and followers, where leaders need to process the writes.
Different nodes play different roles in clustered database architectures
If you need to talk to a particular cluster member, though, this limits the usefulness of abstractions like load balancers, because they don’t know or care about any of these differences. In several databases, this necessitates the idea of a “routing driver” that handles the logic at the application or driver level, which in turn complicates the orchestration networking configuration.
They Scale Differently — With microservices, suppose I have a function hello that returns “hello world.” I can scale this service up and down very quickly: launching the container takes seconds, and there’s no stored data to synchronize. With databases, when you “scale up,” you may need to copy most or all of the database contents to the new node before it can fully participate in the cluster. Doing that “scale up” may also place extra load on the other cluster members while they replicate data and check in with their new peer. When you scale down, your cluster finds that a member has gone missing; it will recover, but each of the other instances needs to update its member list.
None of that is happening with your typical microservice. As a result, up-front capacity planning for databases is more important, as it isn’t going to be a good idea to rapidly add/remove 10 instances of a 5TB database.
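A rough back-of-envelope calculation shows why. Every number here is an illustrative assumption (store size, link speed, and an effective-throughput factor accounting for protocol overhead and the load on serving members), not a benchmark of Neo4j replication.

```python
# Back-of-envelope sketch: why "scaling up" a stateful database is slow
# compared to a stateless service. All numbers are illustrative assumptions.

def replication_time_hours(db_size_tb, throughput_gbit_per_s, efficiency=0.5):
    """Rough hours needed to copy the store to a new cluster member.

    efficiency discounts the raw link speed for protocol overhead and for
    the fact that existing members are also serving queries while copying.
    """
    db_bytes = db_size_tb * 1e12
    bytes_per_s = (throughput_gbit_per_s * 1e9 / 8) * efficiency
    return db_bytes / bytes_per_s / 3600

# A 5 TB store over a 10 Gbit/s link at 50% effective throughput:
hours = replication_time_hours(5, 10)
print(round(hours, 1))  # 2.2 -- hours before the new member can fully participate
```

Even under generous assumptions, the new member spends hours catching up, whereas a stateless `hello` container is serving traffic within seconds of starting.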
Noisy Neighbors, Co-Residency, and HA
In an orchestrator, it is also important to think about affinity and anti-affinity rules to spread cluster members out across physical hardware. With microservices that run many copies, this may matter less; but if you’re running a 3-node clustered database for high availability, it is very important to ensure that not all 3 pods are running on the same physical server. If they are, and that physical server dies, then the database is completely down despite your attempts to guarantee high availability (HA)!
If all three nodes of your cluster live on the same physical server, and that server burns, you have no high availability (HA)!
Spreading things out (and avoiding co-residency) also helps to insulate you from so-called “Noisy Neighbors” or other workloads on the same machine that are soaking up shared resources.
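In Kubernetes terms, spreading the pods out can be expressed with pod anti-affinity. The fragment below is a hedged sketch: the label names are assumptions matching the earlier examples, and it uses the hard `requiredDuringScheduling` form, which refuses to schedule a second core pod onto an occupied node.

```yaml
# Illustrative pod-template fragment: required anti-affinity so no two
# Neo4j core pods land on the same node. Labels are assumptions.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: neo4j
              role: core
          topologyKey: kubernetes.io/hostname   # at most one core pod per node
```

A softer alternative is `preferredDuringSchedulingIgnoredDuringExecution`, which lets the scheduler co-locate pods if it has no other option rather than leaving them unschedulable.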
Disks: Thin vs. Thick Provisioning
In virtualized environments, whether in the cloud or on-prem, when you create a persistent volume claim you take for granted that you get a disk and it “just works.” But how does that actually work?
Under the covers, your virtual disk is usually allocated from some kind of storage solution, or SAN. Many of those tend to use thin provisioning by default.
Over-allocation or over-subscription is a mechanism that allows a server to view more storage capacity than has been physically reserved on the storage array itself. This allows flexibility in growth of storage volumes, without having to predict accurately how much a volume will grow. Instead, block growth becomes sequential. Physical storage capacity on the array is only dedicated when data is actually written by the application, not when the storage volume is initially allocated. The servers, and by extension the applications that reside on them, view a full-size volume from the storage but the storage itself only allocates the blocks of data when they are written.
This is great for a wide range of applications but very bad for databases. It’s possible in some storage environments for your database to try to write data and to fail, because the underlying storage isn’t available. This can occur when the storage layer becomes over-subscribed and can be a catastrophic problem for the database, because its core underlying assumption is that it has a disk that it can use — which turns out not to be true. If you have a clustered Neo4j setup and this is only happening to one node, you probably won’t lose data (because the HA capabilities of the database are keeping the data safe on the other two nodes) but you’re going to see lots of strange errors which make no sense, and you’ll have operational issues.
The fix is to ensure that when you allocate disk, you’re doing it in a “thick” provisioning way, meaning that when the Persistent Volume is created, you’re guaranteed an exclusive hold on that capacity. How you do this differs between cloud environments and storage solutions, so depending on your Kubernetes provider or method of hosting, this may or may not apply to you; either way, it’s something to keep a careful eye on.
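As one provisioner-specific sketch: with the in-tree vSphere volume provisioner, a StorageClass can request thick provisioning through its `diskformat` parameter. Other provisioners expose different knobs, or none at all, so treat this as an example of the pattern rather than a universal recipe; the class name is an assumption.

```yaml
# Illustrative, provisioner-specific sketch of thick provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thick-disks
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: eagerzeroedthick   # fully pre-allocated and zeroed up front
```

Database PersistentVolumeClaims would then reference `storageClassName: thick-disks`, ensuring the capacity the database believes it owns is actually reserved on the array.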
Questions? Comments? Come discuss on the Neo4j Community Site on this thread.
Published at DZone with permission of David Allen, DZone MVB. See the original article here.