Tools and Techniques for Building Microservices
This article follows a path from planning through developing and testing a microservices architecture, and recommends tools, many of them open source, for each step along the way.
Microservices, or microservices architecture, is an architectural style that structures an application as a collection of loosely coupled services. It has many benefits, such as improving modularity and making developers' lives easier by simplifying development, testing, and debugging. It also lends itself well to CI/CD. This article focuses mainly on RESTful microservices; however, most of the tools and techniques can be used regardless of language or architecture.
Now let us see a few techniques and available tools:
Technology Choice: Each microservice can be implemented in any programming language and can run on different infrastructure. The main technology choices are how microservices communicate (synchronously, asynchronously, etc.) and which protocol they use (REST, messaging, etc.). Based on the service requirements, we need to choose the communication mechanism and protocol. The architectural components can be broadly categorized as 1) API gateway, 2) load balancer, 3) service discovery, 4) service, and 5) database/cache. This page describes the tech stacks different organizations are using and can be used as a reference.
Documentation: We all know how important it is to document the architecture and design of any service, but we often get confused about what and how to document. Many templates are available; one of them is arc42, a free, open-source template. Apart from architecture documentation, if the service exposes an API, tools like Swagger, Apiary, and ReDoc can help generate API documentation automatically.
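As a rough illustration of the kind of metadata these generators work from, the sketch below annotates a Spring REST endpoint with OpenAPI (Swagger) annotations. The UserController, its path, and the nested DTO are hypothetical, and a library such as springdoc-openapi is assumed to be on the classpath to render the annotations as documentation.

```java
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller; with springdoc-openapi on the classpath, these
// annotations are picked up and rendered as interactive docs (e.g. Swagger UI).
@RestController
public class UserController {

    @Operation(summary = "Fetch a user by id",
               description = "Returns the user resource, or 404 if it does not exist.")
    @ApiResponse(responseCode = "200", description = "User found")
    @GetMapping("/users/{id}")
    public User getUser(@PathVariable("id") long id) {
        // A real implementation would delegate to a service/repository layer.
        return new User(id, "example");
    }

    // Minimal DTO so the sketch is self-contained.
    record User(long id, String name) {}
}
```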
Development: The development process is similar to any other kind of application development. Any IDE of the developer's choice, such as Eclipse or IntelliJ IDEA, a text editor like Atom (open source) or Sublime Text, and any version control system, whether client-server (SVN, Perforce) or distributed (Git, Visual Studio Team Services), can be used. To build and run the tests, we need a build/project management tool like Maven or Ant. There are tools like Nexus and Artifactory for storing the generated artifacts. To automate the build and tests, we can use an automation server like Jenkins or Bamboo.
Code review: Code review is a systematic examination of source code written in any language. Code reviews are carried out to catch obvious logical errors, verify that requirements are fulfilled, confirm adherence to best practices, etc. Reviews can be done through pair programming, informal walkthroughs, or a formal review process, and it is always good to have a formal process in place. Collaborator (free for teams of up to 10) from SmartBear supports almost all VCSs (SCMs), like Git, Subversion, Perforce, and ClearCase, and is available for Windows, Linux, and Mac. Crucible is another popular tool, from Atlassian, which supports VCSs like Git, SVN, CVS, Perforce, etc. Gerrit and Phabricator are two among many free/open-source code review tools. Apart from these, we should also focus on continuous inspection of code quality, performing automatic reviews with static analysis to detect bugs, code smells, etc. with the help of tools like SonarQube and PMD.
Logging: Logging is one of the most important aspects of any service. For any service, we need both an access log and a service log. Simply storing logs adds little value unless we have some mechanism to analyze them and make sense of them.
Access log: Usually, all application/web servers provide an access log and an error log. The access log keeps track of each incoming request, its parameters, host, response status, etc., and the error log keeps track of errors.
Service log: This log can be stored and processed within each service or by the infrastructure; either way, logs need to be generated by each service. While writing the logging logic, we should consider adding the time, source name (class, method name, etc.), severity, and relevant content such as the message, stack trace, etc. That way, when we see a log statement, we know which service generated the log event and where in the service it was generated. The problem then is finding out which actions led up to the event. We need a way of tracing a series of events back to the source, even if it means traversing multiple services. The solution is to assign a unique identifier when a request enters the architecture and carry that same identifier until the request finishes. MDC (Mapped Diagnostic Context) is an instrument for distinguishing interleaved log output from different sources; log output is typically interleaved when a server handles multiple clients near-simultaneously. (A minimal MDC sketch follows the two storage options below.)
Within services: Maintaining the log lifecycle within each service has advantages: the service is completely independent of other services and can choose whatever logging strategy suits it best. At the same time, it has disadvantages: every service needs to implement its own logging strategy, which is redundant and makes changing logging behavior across services complex.
From Infrastructure: In this approach, each service sends its logs to a central service, and the central service knows how to process and store them, or forward them to some other log server.
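To make the correlation-identifier idea concrete, here is a minimal sketch using SLF4J's MDC in a servlet filter. The header name X-Correlation-Id and the filter class are illustrative choices, not a standard, and Servlet 4.0+ is assumed so that only doFilter needs to be implemented.

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

// Illustrative filter: puts a correlation id into the MDC when a request enters
// the service and clears it when the request finishes. With a pattern such as
// %X{correlationId} in the logging configuration, every log line written on
// that request's thread carries the same id.
public class CorrelationIdFilter implements Filter {

    private static final String HEADER = "X-Correlation-Id"; // assumed header name
    private static final String MDC_KEY = "correlationId";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String id = ((HttpServletRequest) request).getHeader(HEADER);
        if (id == null || id.isEmpty()) {
            id = UUID.randomUUID().toString(); // the request entered the architecture here
        }
        MDC.put(MDC_KEY, id);
        try {
            chain.doFilter(request, response); // downstream calls should forward the same header
        } finally {
            MDC.remove(MDC_KEY); // always clean up so pooled threads don't leak ids
        }
    }
}
```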
Viewing logs: Simply grepping log files is not a scalable way to view logs. There are tools that make viewing, searching, and analyzing logs much easier; Splunk and Kibana (from the ELK stack) are two well-known examples. Spring Cloud Sleuth is a Spring Cloud project based on the MDC concept, where you can easily extract values put into the context and display them in the logs. Zipkin is a distributed tracing system that helps gather the timing data needed to troubleshoot latency problems.
Testing: Along with unit tests, it is very important to have integration tests covering all scenarios. We might choose any development approach, from TDD to BDD or ATDD. Tools like Randoop and junit-tools help generate unit tests in Java when we write tests after coding, while REST Assured, Postman, Karate, and Zerocode help in writing integration tests. This article describes a few of them.
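As a small illustration of an integration test in this style, the sketch below uses REST Assured with JUnit 5 against a hypothetical /users/{id} endpoint; the base URI and the expected JSON fields are assumptions, not part of any real API.

```java
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

// Sketch of a REST Assured + JUnit 5 integration test against a running service.
class UserApiIT {

    @BeforeAll
    static void configure() {
        // Hypothetical location of the service under test.
        io.restassured.RestAssured.baseURI = "http://localhost:8080";
    }

    @Test
    void getUserReturnsExpectedPayload() {
        given()
            .accept("application/json")
        .when()
            .get("/users/{id}", 42)           // path parameter substitution
        .then()
            .statusCode(200)                  // assert on the HTTP status
            .body("id", equalTo(42))          // and on fields of the JSON body
            .body("name", equalTo("example"));
    }
}
```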
Continuous integration and continuous delivery (CI/CD): CI and CD are key requirements for achieving success with microservices. Without a good CI/CD process, we will not achieve the agility that microservices promise. When we talk about CI/CD, we are really talking about several related processes: continuous integration, continuous delivery, and continuous deployment. There are various tools available to achieve these; XebiaLabs' periodic table of DevOps tools provides a nice overview of what is available in the industry.
Performance Testing: Besides unit and integration tests, we should perform other types of tests, such as load and performance tests. Apache JMeter is a good example of such a tool. BlazeMeter is another tool that allows you to set your target KPIs as failure criteria, tracks performance over time, and combines multiple tests to run as one while maintaining granular reporting. To resolve performance bottlenecks, pin down memory leaks, and understand threading issues, we can use an application profiler like JProfiler.
Monitoring: One of the most frequently discussed challenges related to microservices is monitoring. Along with knowing whether a service is responding or not, it is also worth knowing whether other parts of the system, like databases, message brokers, etc., are working correctly. Apart from this, we are interested in various metrics, like the number of processed requests, throughput, load, number of errors, etc. To gather statistics (metrics) for individual operations of a service, we need to instrument the service using instrumentation libraries like the Coda Hale/Yammer Java Metrics Library or the Prometheus client libraries. After collecting metrics, we can monitor them using Grafana, Prometheus, or AWS CloudWatch.
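To give a feel for what such instrumentation looks like in code, here is a minimal sketch using the Dropwizard (Coda Hale) Metrics library. The metric names, the OrderMetrics class, and the console reporter are illustrative choices; in practice the registry would usually be wired to Prometheus, Graphite, CloudWatch, or a similar backend.

```java
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Sketch: instrument one operation of a service with a timer and an error counter.
public class OrderMetrics {

    private static final MetricRegistry registry = new MetricRegistry();
    private static final Timer requests = registry.timer("orders.create.requests");
    private static final Counter errors = registry.counter("orders.create.errors");

    public void createOrder() {
        try (Timer.Context ignored = requests.time()) {   // records latency and throughput
            // ... actual business logic would go here ...
        } catch (RuntimeException e) {
            errors.inc();                                  // count failures separately
            throw e;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Illustrative reporter that dumps all metrics to stdout every 10 seconds;
        // real deployments typically export to a monitoring backend instead.
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(10, TimeUnit.SECONDS);

        new OrderMetrics().createOrder();
        Thread.sleep(11_000); // let the reporter print once before the demo exits
    }
}
```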
So far, we have discussed various technologies and tools available as of today, but the world is changing rapidly and knowing only the established technology isn't enough anymore. We must look beyond today's solutions and be ready for what lies ahead.
To keep ourselves up to date with the latest technologies and tools, we should be doing things like:
Making use of web, print, and social media.
Attending training.
Getting our hands dirty.
Attending group meetings or conferences.
Contributing to open source.