
When PostgreSQL Doesn't Scale Well Enough

You might think you'd hit the limits of a standard database pretty quickly, but you may be surprised how far out those limits really are.

By Chris Travers · Apr. 18, 2016 · Database Zone · Opinion

The largest database I have ever worked on will, it now appears, eventually be moved off PostgreSQL. The reason is that PostgreSQL doesn't scale well enough for it. I am writing this up, however, because the limits involved are so extreme that the story ought to give plenty of ammunition against anyone who claims databases don't scale.

The database is currently 10 TB and doubling every year. The main portions of the application have no natural partition criteria. The largest table, currently 5 TB, is the fastest-growing part of the application.

10 TB is quite manageable, and 20 TB will still be manageable. By 40 TB, we will need a bigger server. But doubling every year puts us at roughly 320 TB in five years (10 TB × 2^5), so the future does not look good for staying with PostgreSQL.
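
The projection is simple compounding: size after n years is 10 TB × 2^n. A minimal sketch of the arithmetic (the 10 TB starting point and annual doubling are from the post; everything else is just the math):

```python
# Project database size under annual doubling, starting from 10 TB.
START_TB = 10  # current size, per the post

for year in range(8):
    size_tb = START_TB * 2 ** year
    print(f"year {year}: {size_tb} TB")

# year 5 -> 320 TB, the post's five-year figure;
# year 7 -> 1280 TB (~1.3 PB), matching the longer-term outlook below.
```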

I looked at Postgres-XL, and it would be useful if we had good partitioning criteria, but that is not the case here.
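
For context, a "partitioning criterion" here means a column whose value can route a row, and most queries over it, to a single node. Below is a toy sketch of that idea using hash routing on a hypothetical customer key; it illustrates the concept only and is not Postgres-XL's actual distribution machinery:

```python
import hashlib

NODES = 4  # hypothetical shard count, for illustration only

def node_for(key: str) -> int:
    """Route a row to a shard by hashing its distribution key."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NODES

# With a good distribution key, a point lookup touches exactly one node:
print(node_for("customer:42"))

# Without such a key, every query has to fan out to all nodes and the
# results must be merged afterwards, which erases most of the benefit:
fanout = [f"run query on node {n}" for n in range(NODES)]
print(fanout)
```

Without a column like that, distributing the big table buys little: each node still participates in nearly every query.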

But how many cases are there like this? Not too many.

EDIT: It seems I was misunderstood. This is not a complaint that PostgreSQL scales poorly; it is about a case that falls outside all reasonable limits.

Part of the reason for writing this is that I hear people claim the RDBMS model breaks down at 1 TB, which is hogwash. We are facing problems only as we look toward 100 TB. Moreover, I think PostgreSQL would handle 100 TB fine in many other cases, just not ours. PostgreSQL at 10, 20, or 50 TB is quite usable even where big tables have no adequate partitioning criterion (needed to avoid running out of page counters), and at 100 TB in most other cases I would expect it to be a great database system. But the problems we will hit by 100 TB are compounded by the exponential growth of the data (figure that within 8 years we expect to be at 1.3 PB). So the only real solution is to move to a big data platform.
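
The "page counters" remark presumably refers to PostgreSQL's 32-bit block numbers: with the default 8 kB page size, a single table tops out at 2^32 × 8 kB = 32 TB, so a table must be partitioned before it reaches that size. A quick check of the arithmetic, including when the post's 5 TB table would cross the ceiling if it doubles along with the database:

```python
# PostgreSQL addresses heap blocks with a 32-bit block number.
BLOCK_SIZE = 8192     # default page size, in bytes
MAX_BLOCKS = 2 ** 32  # 32-bit BlockNumber

max_table_bytes = MAX_BLOCKS * BLOCK_SIZE
print(max_table_bytes / 2 ** 40)  # 32.0 (TiB per table)

# A 5 TB table doubling yearly crosses 32 TB by year three:
# 5 -> 10 -> 20 -> 40 TB.
table_tb, years = 5, 0
while table_tb <= 32:
    table_tb *= 2
    years += 1
print(years)  # 3
```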

PostgreSQL

Published at DZone with permission of Chris Travers, DZone MVB. See the original article here.
