Disposable Architecture: The Tiny Microservice
Microservices won't just allow us to develop and deploy faster; they could mean the end of legacy code as we know it.
The moment code is written, it becomes legacy code: something that needs to be maintained and that another developer probably wants to rewrite. Developers love to rewrite other people's code. When handing a project over to a new developer, management can often expect a sucking of teeth along with the painful words: “this is going to need a complete rewrite.”
Whilst this may be a (slight) exaggeration, it's something we've all experienced. It's a problem exacerbated by monolithic architectures and large codebases, which are difficult to modify and untangle. When the codebase has a wide array of responsibilities, it can be scary to make changes for fear of breaking something else, or simply because the code is a big ball of mud. If we make a mistake on a monolith, such as a bad technology choice or a design flaw, the problem is magnified by the sheer amount of code exposed to it.
Fortunately, as an industry, we're moving towards microservices: collections of smaller processes which can be released independently of each other. This means less stepping on other people's toes, more regular releases, and a step closer to the dream of CI and CD.
One of the big questions surrounding microservices is “how micro is a microservice?” My favorite answer to this is from Nic Ferrier, who, to paraphrase, says that a microservice should be rewriteable in about two weeks, give or take. Whilst this may sound bonkers at first, the concept brings a lot of benefits.
If I can rewrite any service in two weeks, I have a lot of room for experimentation. Want to try writing it in Clojure? Fill your boots. Want to try a new caching technology because it'll help with current issues? Give it a go. If you try something and it goes wrong, the biggest mistake window is two weeks. This is nothing in the lifetime of a project, and paltry compared to the time spent digging through a “perfectly” up-front-designed monolith.
There's more: if you can build a subsystem in two weeks, that inherently stops a lot of bad practices. You can't put out a client library in that time and get people to upgrade, so instead you have to rely on standards such as REST. You can't build a giant Oracle database in two weeks, so instead you subset your data into your own data source, one appropriate for your needs. It's much easier to deploy regularly and rapidly if your microservice is the master of its own data, and the smaller the codebase, the faster it will be to iterate and develop.
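The shape this takes in practice can be sketched as a tiny service that owns its own in-memory data subset and speaks plain REST/JSON, so consumers integrate over a standard rather than a shipped client library. The route, film records, and handler names below are illustrative assumptions, not from the article:

```python
# A minimal sketch of a two-week-sized microservice, assuming an in-memory
# dict stands in for the service's own private data source. All names and
# data here are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service masters only the data it needs -- no shared giant database.
FILMS = {
    "1": {"id": "1", "title": "Metropolis", "year": 1927},
    "2": {"id": "2", "title": "Alien", "year": 1979},
}

def get_film(film_id):
    """Pure lookup, kept separate from HTTP so the core logic is testable."""
    film = FILMS.get(film_id)
    return (200, film) if film else (404, {"error": "not found"})

class FilmHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Consumers talk plain REST/JSON: GET /films/<id>.
        film_id = self.path.rstrip("/").rsplit("/", 1)[-1]
        status, body = get_film(film_id)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

# To run: HTTPServer(("", 8080), FilmHandler).serve_forever()
```

Because the whole service fits in one file, rewriting it in another language or swapping the data store is a contained, low-risk job.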
How many times have you seen a system require some level of rewrite because of an early technical decision (“let's use Hibernate!”) that later became a blocker and an expensive decision to retreat from? The ability to back out of a technology decision quickly and safely is much more important than the choice of technology itself, and disposable architecture limits our risk because a rewrite becomes cheap. We shouldn't strive to get solutions perfect up front, as accurately predicting the future is impossible; instead, we should ensure that we can adapt and change quickly as we learn more about the problem.
This is disposable architecture. There is no such thing as legacy code in this style of system, as any service is small enough that its responsibility will always be clear: “this service has all the APIs for accessing film data.” There's no feature creep; it doesn't do “APIs for film data, oh, and also actor data, and this other thing we had to shove in there as a feature request this one time.” The interactions are clear and can be understood quickly, and therefore can be replicated and replaced.
Achieving this level of system also forces a level of maturity on your DevOps operation. Whilst the individual services become less complex, the overall system becomes a large collection of small systems, which introduces a management overhead. There has to be a well-understood way of monitoring the microservices, one that a new service can easily hook into. There has to be enough maturity in the development team to balance YAGNI (You Aren't Gonna Need It) and JFDI (Just Flipping Deploy It) against the fact that the application will need to be scalable and meet certain performance requirements. However, I can guarantee it's much easier to scale a small, focused application in any direction than it is to squeeze more performance out of a large one.
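One common form the “well-understood way of monitoring” takes is a shared health-check contract: every service exposes the same endpoint with the same payload shape, so a newly built service hooks into monitoring for free. The endpoint name and payload fields below are assumptions for illustration, not a standard the article mandates:

```python
# A hedged sketch of a monitoring convention shared by all services,
# assuming the fleet agrees on GET /health -> 200 + JSON. The payload
# shape is hypothetical.
import json
import time

START_TIME = time.time()

def health_check():
    """Return the health payload every service in the fleet agrees on."""
    return {
        "status": "UP",
        "uptime_seconds": round(time.time() - START_TIME, 1),
    }

def handle(path):
    # Monitoring tools only need this one contract to cover a new service.
    if path == "/health":
        return 200, json.dumps(health_check())
    return 404, json.dumps({"error": "not found"})
```

Keeping the contract this small is itself a YAGNI/JFDI balance: just enough structure that the growing collection of services stays observable without central coordination.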