DevOps in a Microservices World
Microservices are the new trend, but companies are already mobilizing development and IT teams into DevOps teams. Should businesses try to cover both movements?
Do we really need to contend with both DevOps and microservices simultaneously? Isn’t one hype-fueled initiative enough of a challenge for an organization to take on, especially these days, with new programming languages, databases, and frameworks popping up on a near-daily basis? We’re already busy enough trying to keep up with the pace. Do we really need to morph our Ops team into a DevOps team while we simultaneously wreak havoc on our source code to deliver microservices? The short answers are yes, it depends, no, and it depends.
What DevOps is not
Contrary to popular belief, DevOps is not a role. So, to answer one of the questions above, no, you don’t need to morph your Ops team into a DevOps team. DevOps is a term that describes the mindset and the organizational structures that are necessary to deeply integrate Development and Operations. Ops will still be Ops and Devs will still be Devs. The difference is, these teams need to begin working closely together. And interestingly, DevOps seems to induce more changes in Development teams than it does in Operations staff.
What microservices is not
The term microservices isn’t without its own issues (no, I’m not talking about whether or not it’s really SOA 2.0). Microservices are a product of software architecture and programming practices. The potential mistake in developing microservices is ignoring the impact that microservices can have on Operations. Microservices architectures typically produce smaller, but more numerous artifacts that Operations is responsible for regularly deploying and managing. The term that describes the responsibilities of deploying microservices is microdeployments.
So, what DevOps is really about is bridging the gap between microservices and microdeployments. How can we do this in a way that makes daily work and processes more manageable? There are two points to consider here: your mindset and your tool set.
It all really starts with developers. As a dev, you should be aware that you’re laying the groundwork for a successful delivery pipeline. Regardless of the tools you use (that’s step two of the process), you should be open to the idea that you’re doing constructive preparation work for Ops. By relying on certain tools (for example, Maven or Gradle) or scripts for repetitive work, you’re implicitly documenting how processes are to be performed. If you adhere to the same quality standards for your scripts that you do for your product’s code, you’ll have an up-to-date specification available at all times.
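As an illustration of scripts serving as implicit documentation, here is a minimal Python sketch. The artifact name, environment, and steps are all hypothetical; the point is that an explicit, versioned script records how a deployment is performed, so the "specification" can never drift out of date the way a wiki runbook can.

```python
# Hypothetical deployment steps expressed as code rather than a runbook page.
# Because the steps are explicit and version-controlled, the script doubles
# as always-current documentation of how a release is actually performed.

def deploy_artifact(artifact, target_env):
    """Return the ordered steps a deployment performs for a given artifact."""
    steps = [
        f"verify checksum of {artifact}",
        f"upload {artifact} to {target_env}",
        f"run smoke tests on {target_env}",
        f"switch traffic to the new version on {target_env}",
    ]
    return steps

# Example: the same script works for any artifact and environment.
for step in deploy_artifact("shop-service-1.4.2.war", "staging"):
    print(step)
```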
As an Ops person, you should embrace the preparation work that’s performed by devs. I’ve heard Operations staff voice a fear that their jobs will become redundant if they no longer have the responsibility of manually deploying new code into production. They assume there will be no work left for them to do, but this isn’t true. Ops personnel aren’t only responsible for manual deployments; they have plenty of other tasks to do. And, believe it or not, I’ve participated (remotely) in more than one deployment where an Ops person manually uploaded and deployed a WAR file from their laptop while sitting in a pub. And all went well...
So, making deployments easier and more predictable because of all the preparation work performed by devs isn’t something to complain about. Besides, instead of manually deploying other people’s code, Ops needs to run the continuous deployment infrastructure; so the time that’s saved in one area of responsibility is well invested into another.
Your tool set
Over the years, certain tools have become more or less obligatory. The first among these is version control for source code. All such tools (SVN, Git, Mercurial, etc.) have their advantages and disadvantages, but opting out of version control entirely isn’t an option.
Continuous Integration is the second link in the tool chain. An integral part of continuous integration is unit testing. It’s really hard to do unit testing well. I’m not saying this because I want to feign sympathy for all of you unit testing deniers, I’m saying this because unit testing is not about getting test coverage to 100%. The goal should be writing smart tests that discover as many problems as possible. And it’s about organizing your tests in a way that keeps them maintainable. It’s very likely that the amount of test code you have exceeds the amount of code that’s actually tested. So, getting your tests done right is about much more than just “writing tests.” I’m emphasizing the difficulty of testing because it’s something I myself haven’t quite mastered yet:
I’ll never forget how proud I was when my team started a new project and after the first sprint we had about 130 unit tests covering all our code (I was with a different company back then, btw). We had it all, both functional and behavioral tests with JUnit and Mockito.
During the course of the following sprints though, those 130 tests began to decay in value. The way we organized and built the tests didn’t allow us to scale the tests in accordance with the growing code. And we clearly underestimated the amount of time required to maintain the tests. Lacking the necessary test coverage forced us to perform loads of manual testing—much more than what we’d have needed if we’d created the right set of tests in the first place. And despite our manual testing efforts, the final product quality wasn’t even that great. We subsequently had to deploy lots of hotfixes. I think the only reason it worked at all was because we only deployed new code once per sprint. It wouldn’t have worked if we were to deploy more often.
Long story short: getting your unit tests done right helps you take advantage of continuous integration. Unit tests done incorrectly lead to failure in your continuous integration pipeline.
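The anecdote above mentions functional and behavioral tests with JUnit and Mockito; here is a comparable sketch in Python using the standard library’s unittest and unittest.mock. The PaymentService and its gateway are hypothetical. The idea is that a "smart" test pins down one behavior and stubs out the collaborator, so it stays cheap to maintain as the code grows.

```python
import unittest
from unittest import mock

# Hypothetical service under test: charges a payment via an external gateway.
class PaymentService:
    def __init__(self, gateway):
        self.gateway = gateway

    def charge(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.submit(amount)

class PaymentServiceTest(unittest.TestCase):
    def test_charge_delegates_to_gateway(self):
        # Behavioral test: verify the interaction, not the gateway internals.
        gateway = mock.Mock()
        gateway.submit.return_value = "ok"
        self.assertEqual(PaymentService(gateway).charge(10), "ok")
        gateway.submit.assert_called_once_with(10)

    def test_charge_rejects_non_positive_amounts(self):
        # Functional test: invalid input must never reach the collaborator.
        gateway = mock.Mock()
        with self.assertRaises(ValueError):
            PaymentService(gateway).charge(0)
        gateway.submit.assert_not_called()

if __name__ == "__main__":
    unittest.main()
```

Each test names the single behavior it covers, which keeps the suite readable and makes a failing test point directly at what broke.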
Continuous delivery is the next link in the tool chain. You should strive to enable yourself to deploy artifacts with a single mouse click, or even automatically. As a developer, you’ll need this for development in your staging environment anyway.
Having continuous deployment facilities set up allows you not only to do your deployments with less effort, it also allows you to do cool stuff like fully automated, timed deployments at 4 AM when there’s minimal traffic. Or you might even be able to trigger deployments from your favorite pub without your laptop because your mobile phone can do the job, too.
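A timed, low-traffic deployment like the 4 AM example can be as simple as a window check in front of the pipeline trigger. The window boundaries and the deploy callable below are assumptions for illustration; a CI scheduler (a cron trigger, for instance) would call this with the real clock and the real pipeline hook.

```python
from datetime import time

# Hypothetical low-traffic window: deploy only between 03:00 and 05:00.
WINDOW_START, WINDOW_END = time(3, 0), time(5, 0)

def in_deploy_window(now):
    """True if the given wall-clock time falls inside the quiet window."""
    return WINDOW_START <= now <= WINDOW_END

def maybe_deploy(now, deploy):
    """Run the supplied deploy callable only inside the window."""
    if in_deploy_window(now):
        deploy()
        return True
    return False
```

Keeping the window check separate from the deploy action makes both trivially testable, which is exactly the kind of preparation work that turns a manual late-night task into an automated one.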
Continuous monitoring isn’t an established industry term yet, but it fits perfectly with the other “continuous” initiatives like integration and deployment. Continuous monitoring shouldn’t be interpreted simply as 24/7 monitoring. It also implies depth of monitoring coverage: the full application stack, from in-browser front-end performance metrics, through application performance, down to host and virtualized infrastructure metrics.
As with continuous integration, where notifications about failed builds include information about who caused the trouble and in which piece of code, problem notifications from continuous monitoring should intelligently correlate performance incidents from across the application stack and report them in an actionable form that facilitates root cause analysis.
Or, as with continuous delivery, where you can trigger actions with a single mouse click, continuous monitoring should allow you to get all necessary information with a single mouse click. This means that the user interface of your monitoring solution should comprehensively display all necessary information. You’ll want to have your continuous monitoring UI available not only on your desktop, but also on your mobile device and tablet. And when performance problems appear in your environment, your continuous monitoring solution shouldn’t just provide data, it should include all the relevant information you need to get started on problem resolution.
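A sketch of what “actionable” can mean in practice: instead of emitting a bare number, the alert below bundles the breached metric with its recent samples and correlated context (host, service, last deployment), so root cause analysis can start immediately. All names and thresholds here are hypothetical.

```python
from statistics import mean

def build_alert(metric, samples, threshold, context):
    """Return an actionable alert dict when the average breaches the threshold.

    The alert carries recent samples and correlated context rather than a
    bare number, so the notification itself is a starting point for
    root cause analysis.
    """
    avg = mean(samples)
    if avg <= threshold:
        return None  # metric is healthy; no alert
    return {
        "metric": metric,
        "average": avg,
        "threshold": threshold,
        "recent_samples": samples[-5:],
        "context": context,  # e.g. host, service, last deployment
    }

# Example: response times spiked right after a deployment.
alert = build_alert(
    "response_time_ms",
    [120, 480, 510, 495],
    threshold=300,
    context={"host": "web-03", "service": "checkout", "deploy": "1.4.2"},
)
```

A real monitoring solution correlates far more than this, but even a toy version shows the design principle: the alert payload should answer “where do I look first?” on its own.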
Value of a common DevOps toolset
A common toolset for Devs and Ops can really help in establishing common terminology and processes for requirements, dependencies, and problems. If, while discussing a challenge, Devs and Ops have the same tools and processes in mind, chances are that they’ll have an easier time understanding and working with one another. And if Devs and Ops are working jointly on a problem and don’t need to argue about whose task it is to fix a build configuration or build script, chances are that your organization will have no trouble supporting the latest hype-fueled initiative alongside its other daily challenges.
To answer the first question, whether we really need to deal with both microservices and DevOps: if you're already doing DevOps, your team should be set up perfectly for the microservice challenge. If your primary intent is to do microservices, then DevOps is how your teams should work together. DevOps and microservices are both pretty good things, but they work best when applied together.
Opinions expressed by DZone contributors are their own.