
Approximation Algorithms for Your Database


You can't always get what you want, but if you try, you just might find an approximation algorithm to get you pretty close.


In an earlier blog post, I wrote about how breaking problems down into a MapReduce-style approach can give you much better performance. We’ve seen that Citus is orders of magnitude faster than single-node databases when we’re able to parallelize the workload across all the cores in a cluster. And while count(*) and avg are easy to break into smaller parts, I immediately got the question: what about count distinct, the top items from a list, or median?

An exact distinct count is admittedly harder to tackle in a large distributed setup because it requires a lot of data shuffling between nodes. Count distinct is indeed supported within Citus, but it can be slow on especially large datasets. Median across any moderate-to-large dataset can become completely prohibitive for end users. Fortunately, for nearly all of these operations, there are approximation algorithms that provide close-enough answers and do so with impressive performance characteristics.

Approximate Uniques With HyperLogLog

In certain categories of applications, such as web analytics, IoT (Internet of Things), and advertising, counting the distinct number of times something has occurred is a common goal. HyperLogLog (HLL) is a PostgreSQL extension data type that lets you compress raw data down into a value representing how many uniques exist for some period of time.
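As a sketch of what that looks like in practice (the hll type, hll_hash_text, and hll_add_agg come from the postgresql-hll extension; the page_views table and its columns are hypothetical names for illustration):

```sql
-- Assumes the postgresql-hll extension is installed.
CREATE EXTENSION IF NOT EXISTS hll;

-- One HLL sketch per day, compressed from the raw data.
CREATE TABLE daily_uniques (
    day   date,
    users hll
);

-- Roll raw events (hypothetical page_views table, with a
-- text user_id column) up into a single HLL value per day.
INSERT INTO daily_uniques
SELECT date_trunc('day', visited_at)::date,
       hll_add_agg(hll_hash_text(user_id))
FROM page_views
GROUP BY 1;
```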

The result of saving data into the HLL data type is that you would have a value of 25 uniques for Monday and 20 uniques for Tuesday. This compresses down much smaller than the raw data. But where HLL really shines is that you can then combine these buckets by unioning two HyperLogLog values, and you can get back that there were 35 uniques across Monday and Tuesday, because 10 of Tuesday's visitors were repeats:

SELECT hll_union_agg(users) AS unique_visitors
FROM daily_uniques;

 unique_visitors
-----------------
              35
(1 row)

Because HyperLogLog values can be split up and composed in this way, they also parallelize well across all the nodes in a Citus cluster.
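Reading back a single day's approximate count is just a call to hll_cardinality on the stored sketch (using the same daily_uniques table as the query above; the date here is hypothetical):

```sql
-- Approximate distinct visitors for one day.
SELECT day,
       hll_cardinality(users) AS unique_visitors
FROM daily_uniques
WHERE day = '2018-06-20';
```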

Finding a List of Top Things With TopN

Another form of counting that we commonly find in web analytics, advertising applications, and security/log event applications is wanting to know the top set of actions or events that have occurred. This could be the top page views you see within Google Analytics, or it could be the top errors that occurred in your event logs.

TopN leverages an underlying JSONB data type to store all of its data, maintaining a list of which items are on top along with various data about them. As the order reshuffles, it purges old data, allowing it to avoid maintaining a full list of all of the raw data.

In order to use it, you’ll insert into it in a similar fashion to HyperLogLog:

# create table aggregated_topns (day date, topn jsonb);
Time: 9.593 ms

# insert into aggregated_topns select date_trunc('day', created_at), topn_add_agg((repo::json)->> 'name') as topn from github_events group by 1;
Time: 34904.259 ms (00:34.904)

And when querying, you can easily get the top ten list for your data:

SELECT (topn(topn_union_agg(topn), 10)).* 
FROM aggregated_topns 
WHERE day IN ('2018-01-02', '2018-01-03');

 dipper-github-fra-sin-syd-nrt/test-ruby-sample |     12489
 wangshub/wechat_jump_game                      |      6402
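If you only care about a single day, you can skip the union aggregate and expand the stored value directly with topn (same aggregated_topns table as above; the date is hypothetical):

```sql
-- Top ten items from a single day's TopN value.
SELECT (topn(topn, 10)).*
FROM aggregated_topns
WHERE day = '2018-01-02';
```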

More Than Just Counts and Lists

We mentioned earlier that an operation like median can be much harder. And while an extension may not exist yet, there are approaches that could support these operations in the future. For median, multiple algorithms exist. Two interesting ones that could be applied to Postgres:

  • T-digest — provides approximate percentiles
  • HDR (High Dynamic Range) histogram — offers better compression, but focuses mostly on the 99th percentile and up
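For a sense of what such sketches would replace: the exact form of median in Postgres today is the percentile_cont ordered-set aggregate, which forces a full sort of the column and is precisely what becomes prohibitive at scale (the requests table and response_time column here are hypothetical):

```sql
-- Exact median: sorts every value, which is the expensive
-- part on large, distributed datasets.
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY response_time)
       AS median_response_time
FROM requests;
```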

Does an answer that is quite close, but not perfectly exact, meet your needs if it gives you a sub-second response across terabytes of data? In my experience, the answer is often yes.

So, the next time you think something isn’t possible in a distributed setup, explore a bit to see what approximation algorithms exist.
