From Micro Cloud to Micro Cluster
The most exciting thing for me about the Stackato 3.4 release has nothing to do with new product features. Starting with this release, the Stackato Micro Cloud License has been extended to allow the creation of clusters with up to 20GB of RAM without any license fees.
In other words, you can build a small-scale production Platform-as-a-Service with Stackato for free on your own infrastructure or your favorite cloud hosting provider.
"A single VM is not a cloud"
I had the daunting privilege of demonstrating Stackato 1.0 for Guido van Rossum at PyCon 2012 when he stopped by the ActiveState booth. I started with my standard Stackato elevator pitch, but he stopped me when I referred to the Stackato VM as a "micro cloud", objecting to the notion that a single VM running on my laptop could be described as a cloud.
He was right, of course, but I countered with an explanation of how the VM contained all the roles necessary to build a cluster: spawn a number of Stackato VMs on your cloud infrastructure, connect them together, and assign each one specific roles. The platform works essentially the same way whether it's running as a single node or a big cluster, but you get all the "cloudy" advantages of scalability and fault tolerance as you move to the more distributed setup.
He seemed somewhat satisfied by this explanation, but the problem with our definition of "micro cloud" stuck with me. A cloud, however micro, needs to be more than a single node.
Cloud Computing for the People!
ActiveState's slogan from our early days was "Programming for the People", and that ethos is still alive and well here. With me at the PyCon booth was Ingy döt Net, who had advocated strongly for a free license/distribution of Stackato prior to its official release. His feeling was, and I completely agree, that Stackato needed to be in as many hands as possible for it to be a success.
The single-node, non-production provisions in the license allowed individual developers to use Stackato as a testbed, but it wasn't useful for the IT people who wanted to try it out at a scale that would be useful for more than one person.
A small cluster implementation is where Stackato really starts to shine, so we've made that free too.
What a 20GB cluster might look like
The Cluster Setup documentation guides you through the process of spinning up the VMs, connecting them together, and assigning roles. 20GB breaks down nicely into a five-node Stackato cluster of 4GB instances:
1 "Core" node running the base, primary, controller, and router roles. This node exposes the API, the web interface, and the routing gateway through which the outside world reaches your applications.
1 data service node running whichever built-in data services you would like to expose to users (e.g. Redis, Filesystem, MySQL, PostgreSQL, etc.).
3 "DEA" nodes where the application containers run.
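The Cluster Setup docs are the authoritative reference, but the flow looks roughly like this. The commands follow Stackato's kato administration CLI; treat the exact invocations, hostnames, and the 10.0.0.10 Core IP as illustrative placeholders rather than a verbatim transcript:

```shell
# On the Core node: take the core roles and set the API endpoint
# (hostname is a hypothetical example).
kato node setup core api.stackato.example.com

# On the data-services node: attach to the Core node, then enable
# the services you want to offer (10.0.0.10 = hypothetical Core IP).
kato node attach -e 10.0.0.10
kato role add mysql postgresql redis filesystem

# On each of the three DEA nodes: attach and take only the DEA role.
kato node attach -e 10.0.0.10
kato role add dea
```

The same pattern extends to larger clusters: attach a new VM, assign it a role, and the platform absorbs it.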
You can tweak the memory allocation for the VMs as you see fit, but this is a good starting point. You may find after running Stackato for a while that you want more memory dedicated to data services, or that the Core node could be smaller (2GB is usually fine for this node).
The default DEA settings reserve 20% of system memory for the OS and Stackato processes (base, fence, docker, et al.), leaving 3120MB free for user applications on each DEA node, or just under 10GB across the whole cluster. Though this may seem like a lot of system overhead, a cluster of this size is where the memory efficiency of running applications in containers, rather than giving each application its own VM, starts to show.
Especially when running applications with a relatively small memory footprint (say 256MB or less), we can host a lot more applications per GB of RAM since the application containers are sharing resources rather than requiring a whole operating system for each app.
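To make that arithmetic concrete, here is the back-of-envelope math in Python. All figures are the defaults quoted above, not measurements:

```python
# Back-of-envelope check of the cluster memory figures above.
# Numbers come straight from the article's stated Stackato defaults.
usable_per_dea_mb = 3120   # free for apps on each 4GB DEA node after the 20% reservation
dea_nodes = 3

cluster_app_ram_mb = usable_per_dea_mb * dea_nodes
print(f"Total app RAM: {cluster_app_ram_mb}MB (~{cluster_app_ram_mb / 1024:.1f}GB)")

# How many small (256MB) app instances fit in that budget:
small_apps = cluster_app_ram_mb // 256
print(f"256MB app instances that fit: {small_apps}")
```

Thirty-six 256MB instances on five modest VMs is the kind of density the container-versus-VM comparison is about.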
More importantly though, you have a simple interface and API that you can expose to developers for self-service deployment.
A cluster of this size would suffice as a rapid application development platform for a small team of 5-10 people, hosting potentially dozens of applications. Alternatively, you could use it to host just a few applications under heavier load, taking advantage of Stackato's dynamic scalability and fault tolerance features. Maybe a bit of both.
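From the developer's side, that self-service interface boils down to a few commands with the Stackato client. The hostname and application name below are hypothetical examples, and the exact flags may vary by client version:

```shell
# Point the client at the cluster's API endpoint, authenticate,
# then push an app from its source directory.
stackato target api.stackato.example.com
stackato login
stackato push myapp
```

No tickets, no VM requests: the platform provisions the container, wires up routing, and binds any requested data services.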
BYOI - Bring Your Own Infrastructure
I've been driving our Marketing team crazy by insisting on caveats to our "free cluster" message. Just to be clear: Stackato is software. You'll need some kind of virtualized infrastructure or cloud hosting provider to run it, and somewhere along the line you pay for that part.
That said, organizations often already have virtualized infrastructure at their disposal, they just don't have a simple way for developers to use it safely and effectively. Stackato's premise is that private Platform-as-a-Service is the best way to fully utilize that infrastructure.
If our theory is correct, there will be a significant improvement in the efficiency of application hosting on that infrastructure, and an even bigger improvement in the speed with which software moves from development into production.
Our Stackato Enterprise customers are proving this in their own organizations, but now it's time to prove it to the wider world. Having more Stackato clusters in the wild, serving real applications to real users, will validate the adoption of a private PaaS layer in the cloud computing stack.
So get hold of the Stackato VM, build your cluster, and let us know if we're right.
Published at DZone with permission of Troy Topnik, DZone MVB. See the original article here.