Count-Min Sketch: A Data Structure for Stream Mining Applications
In today's age of Big Data, streaming is one of the techniques for low-latency computing. Besides the batch processing infrastructure of the map/reduce paradigm, we are seeing a plethora of ways in which streaming data is processed in near real time to cater to specific kinds of applications. Libraries like Storm, Samza, and Spark belong to this genre and are starting to get their share of the user base in the industry today.
This post is not about Spark, Storm, or Samza. It's about a data structure that is one of the relatively new entrants in the domain of stream processing; it is simple to implement but has already proved to be of immense use in serving a certain class of queries over huge streams of data. I have been doing some reading on the applications of such structures and thought of sharing them with the readers of my blog.
Using Sublinear Space
Besides data processing, these tools also support data mining over streams, which includes serving specialized queries over the data using limited space and time. Of course, if we store all the data as it arrives, we can always serve queries in O(n) space. But since we are talking about huge data streams, it may not even be possible to run algorithms on the full data set; it will simply be too expensive. Even if we have the entire data set in a data warehouse, processing all of it may take time and consume resources that we cannot afford, considering the fees charged under the evolving platform-as-a-service models of cloud-based infrastructure. Also, since these algorithms work on data streams, there is a high likelihood that they will get to see each data item only once, in a single pass. The bottom line is that we need algorithms that work in sublinear space.
Working in sublinear space implies that we don't get to store or even see all the data, and an obvious consequence is that we also cannot deliver exact answers to some queries. Instead we rely on approximation techniques that deliver answers accurate within a reasonably high probability bound. We don't store all the data; we store a lossy, compressed representation of it and answer user queries from this summary instead of the entire set.
One widely used technique for storing a subset of the data is random sampling, where the data to be stored is selected through some stochastic mechanism. There are various ways to determine which data we select for storage and how we build the estimator for querying it. This approach has its pros and cons, but it's one of the simplest ways to do approximation-based queries on streaming data.
There are a few other options, like histograms and wavelet-based synopses. But one of the most interesting data structures developed in recent times is the sketch, which uses summary-based techniques for answering approximation queries, gets around the typical problems that sampling techniques have, and is highly parallelizable in practice.
An important class of sketch is one where the sketch vector (the summary information) is a linear transform of the input vector. So if we model the input as a vector, we can multiply it by a sketch matrix to obtain the sketch vector that contains the synopsis data from which we serve approximation queries. [Figure: the sketch as a linear transform of the input data.]
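As a toy illustration of this view (my own construction, not from the article): a single row of such a sketch can be written as a 0/1 matrix applied to the input frequency vector, and the matrix-vector product gives exactly the same cells as streaming the updates in.

```python
import random

# Toy illustration: one sketch row as a linear transform.
# The input vector a has n counters; the sketch row has w buckets.
n, w = 8, 3
rng = random.Random(42)
P = 2**31 - 1
A, B = rng.randrange(1, P), rng.randrange(P)
h = lambda x: ((A * x + B) % P) % w   # hash: item index -> bucket

# Sketch matrix S: S[v][i] = 1 iff item i hashes to bucket v.
S = [[1 if h(i) == v else 0 for i in range(n)] for v in range(w)]

a = [5, 0, 3, 0, 0, 7, 1, 0]          # input frequency vector

# Sketch vector = S * a (matrix-vector product) ...
sketch = [sum(S[v][i] * a[i] for i in range(n)) for v in range(w)]

# ... which equals the vector obtained by streaming the updates:
direct = [0] * w
for i, c in enumerate(a):
    direct[h(i)] += c

assert sketch == direct
```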
Count-Min Sketch
One of the most popular forms of the sketch data structure is the Count-Min sketch, introduced by Cormode and Muthukrishnan in 2003. The idea is quite simple, and the data structure is based on probabilistic techniques to serve various types of queries on streaming data. The data structure is parameterized by two factors, ε and δ: the error in answering a query is within an additive factor of ε (relative to the total mass of the stream) with probability at least 1 − δ. So you can tune these parameters based on the space that you can afford and trade off the accuracy of the results that the data structure can serve you.
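Concretely, the standard analysis by Cormode and Muthukrishnan gives the sketch dimensions directly from ε and δ (a small helper to make this tangible; the function name is mine):

```python
import math

def cm_dimensions(epsilon: float, delta: float) -> tuple:
    """Width and depth per the Cormode-Muthukrishnan analysis:
    the point-query error is at most epsilon * ||a||_1 with
    probability at least 1 - delta."""
    w = math.ceil(math.e / epsilon)       # buckets per row
    d = math.ceil(math.log(1.0 / delta))  # number of rows / hash functions
    return w, d

# e.g. a 1% error bound holding with 99% probability:
print(cm_dimensions(0.01, 0.01))  # (272, 5)
```

Note how the space is independent of the stream length: 272 × 5 counters, regardless of how many updates arrive.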
Consider a situation where you have a stream of data (typically modeled as updates to a vector a), like updates to stock quotes in a financial processing system, arriving continuously, and you need to process it and report statistical queries on a real-time basis. The Count-Min sketch maintains a two-dimensional array count of d rows and w columns, initialized to zero, along with d pairwise independent hash functions h_1, ..., h_d, each mapping an item into {1, ..., w}. Each update in the stream arrives as a pair (i_t, c_t), meaning that element a[i_t] is incremented by c_t, and is processed as follows:
for j = 1 to d
    v = h_j(i_t)          // v is between 1 and w
    count[j, v] += c_t    // increment the cell by c_t
end
At any point in time, if we want to know the approximate value of an element a[i] of the vector a, we can compute the minimum of the d cells of count that i hashes to. This can be proved formally, but the general intuition is that since we are using hash functions, multiple i's may collide on the same cell and contribute additively to its value, so every cell is an overestimate of a[i]. Hence the minimum among all the hashed cells is the candidate closest to the correct result for the query.
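The update loop and the point query together fit in a few lines. Here is a minimal sketch in Python, assuming non-negative integer item ids and simple universal hash functions of the form ((a·x + b) mod p) mod w; the class and parameter names are mine, not from the article:

```python
import random

class CountMinSketch:
    """Minimal Count-Min sketch over non-negative integer items."""
    P = 2**31 - 1  # a prime larger than any item id

    def __init__(self, w: int, d: int, seed: int = 0):
        self.w, self.d = w, d
        rng = random.Random(seed)
        # d hash functions h_j(x) = ((a*x + b) % P) % w
        self.hashes = [(rng.randrange(1, self.P), rng.randrange(self.P))
                       for _ in range(d)]
        self.count = [[0] * w for _ in range(d)]

    def _bucket(self, j: int, x: int) -> int:
        a, b = self.hashes[j]
        return ((a * x + b) % self.P) % self.w

    def update(self, x: int, c: int = 1) -> None:
        for j in range(self.d):
            self.count[j][self._bucket(j, x)] += c

    def query(self, x: int) -> int:
        # point query: minimum over the d cells that x hashes to
        return min(self.count[j][self._bucket(j, x)]
                   for j in range(self.d))

# usage
cms = CountMinSketch(w=272, d=5)
for item, c in [(7, 3), (42, 5), (7, 2)]:
    cms.update(item, c)
print(cms.query(7))  # >= 5; equals 5 unless 7 and 42 collide in every row
```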
The figure above shows the processing of the updates in a Count-Min sketch. This is typically called the point query, which returns an approximation of a[i]. Similarly, we can use a Count-Min sketch to answer range queries, which are typically summations over multiple point queries. Another interesting application is serving inner product queries, where the data structure is used to estimate the inner product of two vectors; a typical application of this is the estimation of join sizes in relational query processing. The paper Statistical Analysis of Sketch Estimators gives the full details of how to use sketching as a technique for this.
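A compact sketch of the inner-product estimate (the standard Count-Min construction: both vectors must be sketched with the same hash functions, and the estimate is the minimum over rows of the row-wise dot products; the helper names and example data are mine):

```python
import random

w, d, P = 64, 4, 2**31 - 1
rng = random.Random(1)
hashes = [(rng.randrange(1, P), rng.randrange(P)) for _ in range(d)]
bucket = lambda j, x: ((hashes[j][0] * x + hashes[j][1]) % P) % w

def sketch(stream):
    """Count-Min table for a stream of (item, count) updates."""
    count = [[0] * w for _ in range(d)]
    for x, c in stream:
        for j in range(d):
            count[j][bucket(j, x)] += c
    return count

sa = sketch([(1, 2), (2, 3), (5, 1)])   # vector a
sb = sketch([(2, 4), (5, 2), (9, 7)])   # vector b

# inner-product estimate: minimum over rows of the row dot products
est = min(sum(ra[v] * rb[v] for v in range(w))
          for ra, rb in zip(sa, sb))
# true inner product a.b = 3*4 + 1*2 = 14; the estimate never underestimates
print(est)
```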
Count-Min sketches have some great properties that make them very useful when processing distributed streams. They are associative and can be modelled as monoids, and hence perform very well in a distributed environment where you can parallelize sketch operations. In a future post I will discuss some implementation techniques and how we can use Count-Min sketches to serve some useful applications over data streams. Meanwhile, Twitter's Algebird and Clearspring's stream-lib offer implementations of the Count-Min sketch and various other data structures applicable to stream mining applications.
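The monoid structure is easy to see: merging two sketches is just cell-wise addition, provided they share the same dimensions and hash functions. A small demonstration with a toy sketch (the real Algebird/stream-lib implementations look different, but the merge law is the same):

```python
import random

w, d, P = 32, 3, 2**31 - 1
rng = random.Random(7)
hashes = [(rng.randrange(1, P), rng.randrange(P)) for _ in range(d)]

def sketch(stream):
    """Count-Min table for a stream of (item, count) updates."""
    count = [[0] * w for _ in range(d)]
    for x, c in stream:
        for j, (a, b) in enumerate(hashes):
            count[j][((a * x + b) % P) % w] += c
    return count

def merge(s1, s2):
    # cell-wise sum: valid only when both sketches share w, d and hashes
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

part1 = [(3, 1), (8, 2)]   # stream partition processed on node 1
part2 = [(8, 1), (4, 5)]   # stream partition processed on node 2

# sketching the partitions separately and merging gives exactly the
# sketch of the combined stream -- which is why a distributed system
# can sketch shards independently and combine the results
assert merge(sketch(part1), sketch(part2)) == sketch(part1 + part2)
```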
Published at DZone with permission of Debasish Ghosh, DZone MVB. See the original article here.