Sam Newman has published a book on Microservices. He might be the first ThoughtWorker to do so, though I think others are coming; those guys helped define the field, of course. Anyway, I bought Sam’s book as I need to play catch-up. I’m not even ‘on the fence’ about Microservices, and I certainly don’t think the popular framing of “microservices vs monolith” is fair. This article isn’t about Sam’s book; it’s about a pro-microservices thought that has been bouncing around in my head for a while.
With lots of little services, you get the ability to measure costs to the nth degree, then charge them back at a very fine-grained level. Amazon’s “Route 53 pricing” is an example of that, though I don’t know if it is a microservice. Amazon have that as a service for their customers, of course, not for their own assets in production.
I used the Java version of Google’s AppEngine from the outset, and I felt that the first version (and the ten subsequent tweaks) of the charging model were imperfect. Java VMs have a long startup time, and with a WAR-file style of deployment it’s hard to work out what the real costs are on a per-invocation basis. Maybe Sun should have provided cost accounting down to the thread level in Java. Maybe Sun should have made AppEngine themselves in 1998, when they were up to speed with the Cobalt purchase. With a per-invocation charge-back capability, a single JVM could have been put to work for multiple WAR files – WAR files from different customers, I mean. Sun were hippies without business sense, and that led to squandered opportunities.
Microservices can be instantiated from nowhere on demand, do their thing for a request, then disappear. The instantaneousness of that allows you to work out how many invocations an hour you could do with one computing node, and (after some padding for inefficiencies) work out what to charge back to some accounting code. In my opinion, that is neat.
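To make the arithmetic concrete, here is a minimal sketch of that charge-back calculation. All the numbers (node cost, throughput, padding factor, accounting codes) are made up for illustration; nothing here comes from a real bill or a real service.

```python
# Hypothetical chargeback arithmetic: node cost, throughput and padding
# are invented numbers, purely to illustrate the calculation.
NODE_COST_PER_HOUR = 0.50       # what one computing node costs per hour (made up)
INVOCATIONS_PER_HOUR = 90_000   # measured throughput of the service on that node (made up)
PADDING = 1.25                  # 25% padding for startup time and other inefficiencies

def cost_per_invocation(node_cost_per_hour, invocations_per_hour, padding):
    """Padded cost of a single request to this microservice."""
    return node_cost_per_hour / invocations_per_hour * padding

# A toy ledger keyed by accounting code.
ledger = {}

def charge_back(accounting_code, invocations):
    """Accumulate the padded cost of `invocations` requests against an accounting code."""
    cost = invocations * cost_per_invocation(
        NODE_COST_PER_HOUR, INVOCATIONS_PER_HOUR, PADDING)
    ledger[accounting_code] = ledger.get(accounting_code, 0.0) + cost
    return cost
```

With those numbers, a team that makes a full hour’s worth of invocations (90,000) gets charged the padded hourly node cost: 0.50 × 1.25 = 0.625.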
In case my readers are interested, I still prefer “cookie cutter scaling”.