Scaling Microservices: Advanced Approaches With the AKF Scaling Cube

Scaling applications, whether they be microservices or monoliths, can be achieved through practical approaches to architecture, design, and implementation.

By Kevin Crawley · Jun. 24, 19 · Analysis


Once you've optimized your services to utilize the resources available to a single process or node, you must consider how best to approach your scaling efforts. None of these methods is trivial, and we will not delve into the endless debate over which is best. The approach you ultimately take will depend on your current architecture, constraints, and resource availability.

We will be distilling three different approaches based on the AKF scaling cube: duplicate your entire system (horizontal replication); decompose your system into individual functions, services, or resources (service and functional splits); and split your system into individual pieces (lookup or formulaic splits).

The AKF scaling cube was conceptualized by Abbott, Keeven, and Fisher, and it visualizes these categories as the x-, y-, and z-axes.

x: Horizontal Duplication

We'll cover the x-axis first, or horizontal duplication, which is the replication we've touched on briefly in the previous blog posts. This method of scaling involves creating many replicas of an individual service to provide additional processing power. An example of this method of scaling is creating additional workers to complete a task or job.
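The simplest way to spread work across identical replicas is round-robin dispatch. The sketch below illustrates the idea with a hypothetical list of replica addresses; in a real deployment, a load balancer or service registry would supply and health-check these.

```python
import itertools

# Hypothetical replica addresses; a real load balancer or service
# registry would discover and health-check these.
REPLICAS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

_cycle = itertools.cycle(REPLICAS)

def next_replica() -> str:
    """Return the next replica in round-robin order."""
    return next(_cycle)

# Three successive requests land on three different replicas.
targets = [next_replica() for _ in range(3)]
```

Because every replica is identical and stateless, any of them can serve any request, which is what makes this axis of scaling so straightforward.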

This method of scaling loses effectiveness with complex data or transactional systems; it works best when every request can be completed independently by any replica. Modern cloud-native architecture and twelve-factor app design encourage building services in this manner, not only for scalability but for reliability as well.

The CAP theorem tells us that a distributed system cannot simultaneously guarantee consistency, availability, and partition tolerance; when a partition occurs, we must sacrifice one of the first two. If our application must remain consistent when writing data, then all replicas will need to coordinate before acknowledging a write. In most cases, you will want to rely upon external systems to handle data persistence, because they have built solutions which handle these issues pragmatically.

y: Functional or Service Splits

With this approach, we allocate specific resources and capacity to individual functions or "domains" so each has resources dedicated to its task or service. The initial approach to this type of split in a traditional SOA is to make sure our database, web server, and application servers are provisioned on their own dedicated systems. This method of scaling has its own limitations, which is why we are now in the microservice era. Nevertheless, the same principles apply: a database server consumes hardware resources in a very different manner than a web application server.

With service splits, we are taking a more orthogonal approach to our architecture. A straightforward example of this is to separate the transactional portion of a web application from the reporting functionality. This split allows us to keep resources dedicated to serving live users while a separate process or service handles analytics and reporting.

Similar to the benefits gained by splitting our web application server and the database, this type of scaling also lets us allocate resources better suited to each workload. For instance, a transaction system may require substantially more CPU and network I/O than the reporting sub-system, which instead requires additional disk I/O. By segregating these systems, we can tailor the hardware to the task and ultimately save money by not allocating unnecessary resources.

This method of scaling is what you'll focus most of your time on unless you start moving into huge web applications that handle upwards of several million transactions per day or heavy data analytics. Here are just a few of the approaches you may encounter when building services using this approach:

  • Splitting by functionality, with each "function" given its own resources (web server, db server, cache server, etc.).
  • Splitting by service or "domain", with each service on its own pool of resources.
  • Splitting by transaction type.
  • Splitting by user (or tenant).
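A y-axis split ultimately comes down to routing each request to the pool of resources that owns its function or domain. The sketch below uses a hypothetical mapping of URL path prefixes to dedicated service pools; the domain names and addresses are illustrative only.

```python
# Hypothetical mapping of request "domains" to dedicated service pools,
# illustrating a y-axis (functional) split.
SERVICE_POOLS = {
    "orders":    ["orders-1:8080", "orders-2:8080"],
    "reporting": ["reporting-1:8080"],
    "users":     ["users-1:8080", "users-2:8080"],
}

def route(path: str) -> list[str]:
    """Pick the service pool for a request path like '/orders/42'."""
    domain = path.strip("/").split("/", 1)[0]
    try:
        return SERVICE_POOLS[domain]
    except KeyError:
        raise ValueError(f"no service owns domain {domain!r}")
```

Note how each pool can be sized independently: the reporting pool has a single instance because analytics can tolerate queueing, while the transactional pools are replicated.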

z: Lookup-Oriented Split

A lookup-oriented split divides a system by segmenting its data into chunks or segments; those segments are then given dedicated resources. The necessity of z-axis splits comes to bear when the data sets handled by services become too large for a single instance — this method of scaling is often referred to as "sharding."

A common method of performing this type of split is dividing a table by its auto-incrementing id field; this can be done programmatically by assigning each record to a particular database shard. You can read more about this approach in Pinterest's engineering blog post on how they scale their database. This approach suits very large systems which must support many terabytes of transactional data — there are far more pragmatic approaches for less data-intensive applications.
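The simplest illustration of an id-based split is a modulo over the shard count, sketched below with an assumed shard count of four. (Real systems, Pinterest's included, often encode the shard id into the record id itself instead, because a plain modulo makes adding shards painful — every record's home would change.)

```python
NUM_SHARDS = 4  # hypothetical shard count

def shard_for(record_id: int) -> int:
    """Map an auto-incrementing record id to a shard index (z-axis split)."""
    return record_id % NUM_SHARDS

# ids 0..7 spread evenly: shards 0, 1, 2, 3, 0, 1, 2, 3
assignments = [shard_for(i) for i in range(8)]
```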

More practical approaches may include segmenting your data by date, customer, or region. Each approach has its advantages, for example:

  • Date: Each year could have its own database/machine, with non-current years being allocated less resources and optimized for read-only access.
  • Customer: Each customer is different, and one may require more resources than another; by allocating a database per customer, we can assign capacity in a more structured manner.
  • Region: Regional databases ensure access remains available in the event of an outage in another data center, and they also allow allocating capacity based on each region's usage. For instance, we may reduce capacity during the evening in the US while ramping up capacity in APAC.
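The date-based variant above amounts to a lookup from a record's year to the database that holds it. The sketch below uses hypothetical per-year connection strings; note how non-current years point at read-optimized archives.

```python
from datetime import date

# Hypothetical per-year connection strings; non-current years are
# read-only archives on cheaper hardware.
DATABASES = {
    2017: "postgres://archive-2017/readonly",
    2018: "postgres://archive-2018/readonly",
    2019: "postgres://live-2019/readwrite",
}

def database_for(record_date: date) -> str:
    """Route a query to the database holding that year's data."""
    return DATABASES[record_date.year]
```

The same lookup-table shape works for the customer and region variants: only the key function changes.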

Regardless of the method you choose, splitting your data will require significant refactoring of your applications. Since this approach demands significant time and engineering resources, great care should be taken in deciding which approach you ultimately take. Because scaling on the z-axis is considered the most difficult, it is typically done only after the x- and y-axes have been exhausted.

Summary

Scaling applications, whether they be microservices or monoliths, can be achieved through practical approaches to architecture, design, and implementation. Each approach requires significant investment in research and development. There is no magic bullet, but there is a method which can be used to great effect without a tremendous amount of refactoring or rewriting in the event your services weren't designed with scaling in mind. That approach will be covered in depth in our next article, about caching. Stay tuned.


Published at DZone with permission of Kevin Crawley, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
