
When PostgreSQL Doesn't Scale Well Enough


You might think you will hit your limits with standard databases pretty quickly, but you'll be surprised how far out those limits really are.


The largest database I have ever worked on looks like it will eventually be moved off PostgreSQL, because PostgreSQL doesn't scale well enough for it. I am writing this up, however, because the limits involved are so extreme that they ought to give plenty of ammunition to anyone pushing back against claims that relational databases don't scale.

The current database size is 10 TB and doubling every year. The main portions of the application have no natural partitioning criteria. The largest table, currently 5 TB, is the fastest-growing portion of the application.

10 TB is quite manageable. 20 TB will still be manageable. By 40 TB, we will need a bigger server. But in 5 years, we will be at 320 TB, so the future does not look very good for staying with PostgreSQL.
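The arithmetic behind that projection is just repeated doubling. A minimal sketch (the function name is mine; the 10 TB starting point and annual doubling rate are the figures above):

```python
# Project database size under the growth described above:
# a 10 TB starting size that doubles once per year.
def projected_size_tb(start_tb, years):
    """Database size in TB after `years` annual doublings."""
    return start_tb * 2 ** years

print(projected_size_tb(10, 5))  # -> 320 (the 5-year figure above)
```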

I looked at Postgres-XL, and that would be useful if we had good partitioning criteria, but that is not the case here.
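For readers unfamiliar with why a partitioning criterion matters: systems like Postgres-XL scale out by routing each row to one node based on a distribution column. A hypothetical sketch of that routing idea (the node names and key column are made up for illustration; this is not Postgres-XL's actual code):

```python
# Hypothetical sketch of distribution-key routing, the idea behind
# scale-out systems like Postgres-XL: each row lands on exactly one
# node, chosen by hashing a partition column.
from zlib import crc32

NODES = ["node0", "node1", "node2", "node3"]  # imaginary 4-node cluster

def route(partition_key: str) -> str:
    """Deterministically pick the node responsible for a row."""
    return NODES[crc32(partition_key.encode()) % len(NODES)]
```

Queries that filter on the distribution key can be sent to a single node; queries that don't must fan out to every node. That is why a workload with no natural partitioning column, like the one described here, gains little from this approach.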

But how many cases are there like this? Not too many.


EDIT: It seems I was misunderstood. This is not a complaint that PostgreSQL doesn't scale well; it is about a case that sits outside all reasonable limits.

Part of the reason for writing this is that I hear people complain that the RDBMS model breaks down at 1 TB, which is hogwash. We are only facing problems as we look toward 100 TB. Additionally, I think PostgreSQL would handle 100 TB fine in many other cases, just not in ours. PostgreSQL at 10, 20, or 50 TB is quite usable even in cases where big tables have no adequate partitioning criterion (needed to avoid running out of page counters), and at 100 TB, in most other cases, I would expect it to be a great database system. But the sorts of problems we will hit by 100 TB will be compounded by the exponential growth of the data (within 8 years we expect to be at roughly 1.3 PB). So the only real solution is to move to a big data platform.


Topics:
postgres

Published at DZone with permission of Chris Travers, DZone MVB. See the original article here.

