Flex Up and Flex Down Database Capacity
We've explored scaling up and out, and then we talked about scaling in and down. MySQL is a very popular RDBMS, but challenges arise when your workload hits the limits of the largest node you can provision. This third article isn't about adding scale; it's about removing it. From the technical/DevOps perspective, it's hard enough to have sufficient resources deployed, so why would you ever consider removing them?
Everyone’s seen the “wall of shame” of tweets about major e-commerce sites suffering site slowdowns and outages during Black Friday and Cyber Monday. Here’s a brief recap just from the last decade:
2011: PC Mall, Newegg, Toys R'Us, Avon: 30+ minute outages. Walmart: 3-hour outage.
2012: Kohl's: repeated multi-hour outages.
2013: Urban Outfitters, Motorola: offline most of Cyber Monday.
2014: Best Buy: 2+ hours of cumulative outages. HP, Nike: site crashes.
2015: Neiman Marcus: 4+ hour outage.
2016: Old Navy, Macy's: multi-hour outages.
And there are similar “flash sales” (short duration, limited items, deep discount) all over the world, including China’s Singles Day and Flipkart’s Big Billion Day.
This is the very reason scale is needed: to avoid these kinds of high-impact outages. But hidden here is a big reason why this keeps happening.
Workloads with peaks waste a lot of resources during non-peaks. Ideally, capacity should scale elastically: deploy capacity when you need it, and scale it back when you don't. However, most RDBMSs cannot elastically shrink once they're at scale.
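As a back-of-the-envelope illustration (the numbers here are hypothetical, not from any particular deployment): size a fleet for a short seasonal peak without the ability to flex back down, and most of that capacity sits idle the rest of the year.

```python
# Hypothetical numbers: what statically sizing for the peak costs you.
baseline_load = 1.0           # normalized steady-state load
peak_multiplier = 6.0         # a Black Friday / Singles Day style spike
peak_days_per_year = 4

fleet_capacity = baseline_load * peak_multiplier   # sized for the peak, all year

peak_fraction = peak_days_per_year / 365
avg_load = baseline_load * (1 - peak_fraction) + fleet_capacity * peak_fraction
utilization = avg_load / fleet_capacity

print(f"Average fleet utilization: {utilization:.0%}")   # roughly 18%
```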
Most RDBMS deployments don't scale in/down easily. Single-node MySQL deployments can scale up or down on DBaaS solutions like AWS RDS, Azure Database for MySQL, or Google Cloud SQL. But if your deployment leverages master/master replication (including certification-based replication solutions like MariaDB Galera Cluster or Percona XtraDB Cluster) or sharding, scaling the workload back in is tricky.
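For a single-node DBaaS instance, flexing down really can be a single API call. Here's a minimal sketch using boto3 against AWS RDS (the instance identifier and target class are placeholders, and note that an instance-class change still involves a restart):

```python
# Sketch: flex a single-node RDS MySQL instance down to a smaller class.
# The identifier and class names below are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Request the smaller instance class. RDS applies it at the next maintenance
# window unless ApplyImmediately=True; either way the change requires a restart.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-mysql-prod",
    DBInstanceClass="db.r5.large",     # e.g., down from db.r5.2xlarge after the peak
    ApplyImmediately=False,
)

# Block until the instance is back in the "available" state.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="orders-mysql-prod")
```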
Each additional node in a master/master cluster doesn't add linear write scale; it adds high availability. So removing nodes doesn't scale you in nearly as much as actually shrinking each node, i.e., swapping each node for a smaller instance. And that kind of swap requires bringing up separate nodes from backup, using replication to catch up, and then cutting over, which is a lot of effort.
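To make that node-swap concrete, here's a rough sketch of the "catch up, then cut over" step (assuming PyMySQL; hostnames and credentials are placeholders). The smaller replacement node has been restored from backup and attached as a replica, and the cutover only proceeds once replication lag reaches zero:

```python
# Sketch: wait for a smaller replacement node (attached as a replica) to
# catch up before cutting over. Hostnames/credentials are placeholders.
import time
import pymysql

replica = pymysql.connect(
    host="mysql-small-replacement",   # the smaller instance restored from backup
    user="repl_monitor",
    password="***",
    cursorclass=pymysql.cursors.DictCursor,
)

def replication_lag_seconds(conn):
    with conn.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
    # No row (or a NULL lag) means replication isn't healthy yet.
    if not status or status["Seconds_Behind_Master"] is None:
        return float("inf")
    return float(status["Seconds_Behind_Master"])

# Only when lag hits zero is it safe to stop writes on the old node,
# re-point the application at the replacement, and retire the big instance.
while replication_lag_seconds(replica) > 0:
    time.sleep(5)
print("Replacement node caught up; ready to cut over")
```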
Scaling in a sharded array is similarly complex. Partitions have to be consolidated between shards, application queries often have to be modified, and the shard-to-data lookup table (LUT) used for routing has to be updated. Nearly everyone I've talked to who has deployed and/or supported sharded installations has confirmed: "We never try to scale back in."
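To see why, consider a deliberately simplified routing layer (a hypothetical sketch; real sharding layers are far more involved). Scaling in means every row owned by the retired shard has to be physically moved and the lookup table rewritten, all while the application keeps routing queries correctly:

```python
# Sketch of a shard-to-data lookup table (LUT) and what scaling in touches.
# Shard names and the key space are made up; a real system also has to move
# the rows themselves and keep the LUT change consistent with that move.

# Before scale-in: four shards, keyed by customer-ID range.
shard_lut = {
    range(0, 250_000): "shard-a",
    range(250_000, 500_000): "shard-b",
    range(500_000, 750_000): "shard-c",
    range(750_000, 1_000_000): "shard-d",   # the shard to be retired
}

def route(customer_id: int) -> str:
    """Return the shard that owns this customer's rows."""
    for key_range, shard in shard_lut.items():
        if customer_id in key_range:
            return shard
    raise KeyError(f"no shard owns customer {customer_id}")

def retire_shard_d() -> None:
    # 1. Copy/merge shard-d's partition into shard-c (the slow, risky part).
    # 2. Only then rewrite the LUT so shard-c owns the combined range.
    del shard_lut[range(750_000, 1_000_000)]
    del shard_lut[range(500_000, 750_000)]
    shard_lut[range(500_000, 1_000_000)] = "shard-c"
```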
The result? It's difficult to provision sufficient headroom for future peaks when scaling is one-way. If it were up to DBAs and DevOps, every system would have enough headroom for the "unexpected," avoiding service downtime, frustrated users and stakeholders, and blown-up ticket queues. Unfortunately, DBAs and DevOps often don't get to set their own budgets, and finance departments view "headroom" as excess capacity, i.e., wasted resources, which sets up the perennial "estimation game," along the lines of:
DevOps: “We expect 30% more traffic than last year, so we should provision 50% more.”
Finance: “That sounds excessive. You already have half your servers underutilized. I’ll give you 35% more.”
So when traffic spikes to 40% more than last year, the site craters.
When architecting scale for your MySQL deployment, it's important to design in the ability to scale in/down as well. Depending on how you add scale, scaling back in/down will range from straightforward to very difficult. Determining how exposed your workload is to seasonal peaks is key to budgeting sufficient hardware for those peaks without leaving significant numbers of servers underutilized the rest of the year.