Large Scale Distributed Consensus Approaches: Large Data Sets
In my previous post, I talked about how we can design a large cluster for compute-bound operations. The nice thing about that design is that the actual amount of shared data you need is pretty small, so you can just distribute that information among your nodes, let them do stateless computation on it, and you are done.
A much more common scenario is when we can't just do stateless operations, but need to keep track of what is actually going on. The typical example is a set of users changing data. For example, let us say that we want to keep track of the pages each user visits on our site. (Yes, that is a pretty classic Bigtable scenario; I'll ignore the prior art issue for now.) How would we design such a system?
Well, we still have the same considerations. We don't want a single point of failure, and we want to have a very large number of machines and make the most of their resources.
In this case, we are merely going to change the way we look at the data. We still have the same topology as before:
There is the consensus cluster, which is responsible for cluster-wide, immediately consistent operations, and there are all the other nodes, which actually handle processing requests and keeping the data.
What kind of decisions do we get to make in the consensus cluster? Those would be:
- Adding & removing nodes from the entire cluster.
- Changing the distribution of the data in the cluster.
In other words, the state that the consensus cluster is responsible for is the entire cluster topology. When a request comes in, the cluster topology is used to decide which set of nodes to direct it to.
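To make this concrete, here is a minimal sketch of the kind of state the consensus cluster owns. The type and field names are my own invention (the post doesn't specify a format); the point is just that this record is small and changes rarely, which is exactly what a consensus protocol handles well:

```python
from dataclasses import dataclass, field

@dataclass
class ClusterTopology:
    # Sketch only: the shape of this record is an assumption.
    # Bumped on every consensus decision, so nodes can tell stale
    # configurations from current ones.
    version: int = 0
    # All nodes currently in the cluster.
    nodes: set[str] = field(default_factory=set)
    # shard id -> the nodes holding that shard's replicas.
    shards: dict[int, list[str]] = field(default_factory=dict)
```

Any change to this record, such as adding a node or moving a shard, would go through the consensus cluster as a command, rather than being applied locally on whichever node happened to notice the need.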
Typically in such systems, we want to keep the data on three separate nodes, so when we get a request, we route it to one of the three nodes that hold that user's data. This is done by sharding the data according to the actual user id whose page views we are trying to track.
Distributing the sharding configuration is done as described in the compute cluster example, and the actual handling of requests, as well as sending the data between the sharded instances, is handled by the cluster nodes directly.
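As a sketch of how that routing might work, assuming a fixed, hash-partitioned shard space (the shard count, node names, and function names below are illustrative, not from the original):

```python
import hashlib

NUM_SHARDS = 1024   # assumed: a fixed, hash-partitioned shard space
REPLICAS = 3        # the "three separate nodes" from above

# shard id -> replica nodes. In practice this mapping is part of the
# topology that the consensus cluster distributes to every node;
# here it is hard-coded for the example.
shard_map = {s: [f"node-{(s + i) % 12}" for i in range(REPLICAS)]
             for s in range(NUM_SHARDS)}

def shard_for(user_id: str) -> int:
    # A stable hash of the user id, so every node maps the same user
    # to the same shard without talking to anyone else.
    digest = hashlib.sha1(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def replicas_for(user_id: str) -> list[str]:
    return shard_map[shard_for(user_id)]

print(replicas_for("user-42"))  # three nodes, e.g. ['node-7', 'node-8', 'node-9']
```

Note that the routing itself needs no consensus round trip: as long as a node has a recent copy of the topology, it can compute the replica set locally.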
Note that in this scenario, you cannot ensure any kind of safety across concurrent operations. Two requests for the same user might hit different nodes and do separate operations without being able to take the concurrent operation into account. Usually, that is a good thing, but that isn't always the case. That, however, is a topic for the next post.
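Here is a toy illustration of what can go wrong, assuming two replicas that accept writes independently and reconcile by simply overwriting each other's state (all names here are made up for the example):

```python
# Two replicas each record a page view for the same user, then "sync"
# by overwriting with the other's value (last write wins).
replica_a = {"user-42": ["/home"]}
replica_b = {"user-42": ["/home"]}

# Concurrent requests for the same user land on different replicas:
replica_a["user-42"] = replica_a["user-42"] + ["/pricing"]
replica_b["user-42"] = replica_b["user-42"] + ["/docs"]

# Naive reconciliation: replica_b's state replaces replica_a's.
replica_a["user-42"] = list(replica_b["user-42"])

print(replica_a["user-42"])  # ['/home', '/docs'] -- the '/pricing' visit is lost
```

For page-view tracking, silently losing the odd write like this is often an acceptable trade for availability and throughput; for other kinds of data it is not.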