Scaling with RESTful Microservice Architecture
[This article was written by Sam O'Brien.]
As described in a previous post on this blog, we have been using the Dropwizard framework to quickly develop high-quality, easily testable, RESTful microservices that expand the functionality of our product. These complement the existing multi-instance services running in our cluster and contribute to the continued scaling of the Logentries service, both in its ability to handle an ever-increasing number of incoming logs and in the ways it offers to process, analyze, and view those logs.
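To make the shape of such a service concrete, here is a minimal sketch of a single-purpose RESTful endpoint. It is not our actual Dropwizard code: Dropwizard adds configuration, metrics, and health checks on top of a pattern like this, and the JDK's built-in `HttpServer` (plus the hypothetical `LogCountService` name and `/logs/count` resource) is used here only so the example runs without external dependencies.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical single-purpose microservice exposing one read-only
// REST resource. A Dropwizard service would express the same idea
// with a JAX-RS annotated resource class.
public class LogCountService {

    // The resource representation is a pure function, which keeps the
    // service logic trivially unit-testable.
    static String logCountJson(long count) {
        return "{\"logCount\":" + count + "}";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/logs/count", exchange -> {
            byte[] body = logCountJson(42).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```

Because the whole service does one thing, its entire public surface is a single HTTP resource and a single pure function.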
The approach described above is known as microservice architecture, and is fast becoming a go-to solution for many enterprise applications, with implications for the nature of the services provided and the teams who develop and run those services.
Microservices can be characterized by the following traits:
- Size: A microservice is very targeted in functionality and scope. As such, its code base will be relatively small and manageable.
- Loosely coupled: By passing messages between services via an appropriate protocol, we decouple resources in the service from the underlying technologies and topologies.
- Self-monitoring: Or, at a minimum, easily testable.
- Continuous deployment: This does, however, require good DevOps, automation, and acceptance-testing practices and tooling.
- May even be disposable: The system they belong to may be long-lived, but the individual services themselves may be short-lived.
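The loose-coupling trait above comes down to agreeing on a message format rather than sharing code. A minimal sketch, assuming a hypothetical `LogMessage` exchanged between two services: each side depends only on this wire format, not on the other service's implementation, language, or deployment topology.

```java
// Hypothetical message passed between two loosely coupled services.
// Only the agreed JSON shape is shared; either side can be rewritten
// or redeployed freely as long as the format is honored.
public class LogMessage {
    final String host;
    final String line;

    LogMessage(String host, String line) {
        this.host = host;
        this.line = line;
    }

    // JSON is hand-rolled here to keep the sketch dependency-free; a
    // real service would use a serialization library such as Jackson
    // (which Dropwizard bundles).
    String toJson() {
        return "{\"host\":\"" + host + "\",\"line\":\"" + line + "\"}";
    }
}
```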
By combining this model with RESTful interfaces, as described in our previous post on representational state transfer, we have enjoyed the benefits of both while developing some of our latest features. For example, our shareable dashboards feature was developed and exposed in this way.
Benefits to this microservice model include:
- Focused code: Each service is designed to perform a particular task well. There are fewer lines of code, so the code is easier to read and to develop in future. The implementation itself is largely independent of other components in the system, bar the pre-defined messages to be passed, which allows the most applicable tools and frameworks to be used. The benefits of encapsulation we are familiar with from OOP are mirrored at this higher level of abstraction, across the full stack.
- Developer independence: Components which will eventually interact can be developed fairly independently once we know what the interface will look like and what information will be required on each end. This allows developers to focus on individual aspects of functionality with minimal (to no) overlap, which facilitates smooth, fast development.
- Continuous delivery: Deploying a new independent software service won’t interfere with currently deployed and running components.
- Testing: A smaller service is easier and quicker to test than a larger monolithic piece of software.
One of the major benefits that we are concerned with is scalability, and designing an enterprise-level service as standalone microservices serves this well: it allows individual services to be scaled across servers as their load demands increase.
As our customer base expands, and we process and display more logs in ever more ways, we have to scale our services to meet ever-growing demand.
Any company enjoying success in the market should hope, expect, and plan for this.
Breaking down functionality into individual services, which can work in parallel, has allowed us to meet this demand with multiple instances of services running on multiple virtual machines as required behind a load balancer. The use of the RESTful architecture style also allows us to use lessons learned from the largest distributed system in the world, the web, and apply them to our own growing system.
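The scaling model just described can be sketched conceptually: identical instances of a service run on separate virtual machines, and requests are spread across them. In production this job belongs to a dedicated load balancer; the simple round-robin class and instance names below are purely illustrative.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch of round-robin dispatch across identical service
// instances. Scaling out is just adding entries to the instance list;
// no caller needs to change.
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Returns the next instance to receive a request, cycling through
    // the list; the AtomicInteger keeps this safe under concurrency.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

The point of the sketch is the indirection itself: because clients address the balancer rather than any one instance, capacity can be matched to demand per service.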
This model also suits resource allocation well. More demanding services live on more powerful machines and in more instances, while less resource-hungry services share fewer machines with other light services. As business demand grows for our new features, we can match capacity to demand in this way, whereas the alternative monolithic application model is far less flexible.
As a developer, I’m enjoying working on new microservice instances which are effectively greenfield projects with well defined constraints on their interface, but with a flexible internal implementation and therefore a flexible set of tools and frameworks with which we can achieve our goal. As our development team grows in line with our market growth, these kind of practices are already demonstrating their value in flexibility, speed and ease of development.
Published at DZone with permission of Trevor Parsons, DZone MVB. See the original article here.