In Part 2, Peter described optimization of infrastructure for green computing and how Windows Azure is different from other clouds. In this final part, Peter looks in detail at the financial benefits of cloud computing and the change in mindset and models when it comes to developing for the cloud.
Financial benefits of cloud computing
Energy savings and green IT are great eye-catching topics these days, but ultimately we are all in the business for the money. The greenness of a solution should never be the first marketing button you press, as the typical CTO will stop listening to you if he doesn’t hear the magical words TCO, CapEx, and OpEx. While making your IT green is a noble goal, keep it as a second or third argument in your reasoning chain. Cash is still king! So let’s take a look at how cloud computing can improve the financial situation directly and more obviously than through long-term hardware and energy savings.
Currently the operation of enterprise software is heavy on capital expenditure (CapEx) – you have to pay upfront for data center buildings, infrastructure, servers, and software. This is usually very hard to get funding for. Cloud computing allows you to move a big part of these costs from CapEx to operational expenditure (OpEx): instead of paying a large sum upfront and then writing off amortizations over a couple of years, you pay an ongoing operational fee.
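The shift can be illustrated with a back-of-the-envelope comparison. All figures below are hypothetical; the sketch only shows how a one-time purchase plus straight-line amortization compares to an ongoing usage fee.

```python
# Minimal sketch comparing a CapEx purchase (amortized over several
# years) with an OpEx subscription. All figures are hypothetical.

def capex_yearly_cost(upfront: float, years: int, yearly_opex: float) -> list:
    """Straight-line amortization plus a fixed operating cost per year."""
    return [upfront / years + yearly_opex for _ in range(years)]

def opex_yearly_cost(yearly_fee: float, years: int) -> list:
    """Pure pay-as-you-go: only the annual service fee."""
    return [yearly_fee for _ in range(years)]

capex = capex_yearly_cost(upfront=500_000, years=5, yearly_opex=50_000)
opex = opex_yearly_cost(yearly_fee=120_000, years=5)

print(sum(capex))  # total cost of ownership over five years, CapEx model
print(sum(opex))   # total cost over the same period, OpEx model
```

Beyond the totals, the key difference is the shape of the cash flow: the CapEx model front-loads the investment, while the OpEx model spreads it evenly and can shrink if usage drops.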
Figure 4 shows this move from CapEx to OpEx and how it benefits the enterprise from the view of the software vendor.
Figure 4: Selfhosting vs. Utility Computing Expenditures – Software Vendor View
There are certain costs that will not go away easily. You will still have a certain amount of IT assets and, of course, your own product assets – for example internal servers, workstations, developer machines, third-party software, and so on. The parts that are interesting to change are the data center costs. If you run your own data center to host the software solution that you are selling, you have to pay for your assets (servers, buildings, power distribution, and network components) in addition to the operational expenses. Sure, there are ways of getting rid of most of the assets through accounting devices like sale-and-lease-back deals, but those do not change the amount of money spent in the long run and usually incur higher total costs, just distributed over a longer period of time.
By moving your software solution into a cloud computing data center you are completely removing the costs for data center assets and operations and instead pay a usage expense for the service. This has two important benefits:
- Expenses for operating your solution move mostly from CapEx to OpEx and are therefore easier to manage, easier to fund, and more likely to be approved.
- Expenses are now directly usage related. Instead of paying a fixed amortization and operational cost, you pay for the service only as much as your customers use it. You no longer need to plan your hardware for maximum success; you can scale resources dynamically based on customer demand.
The second point is especially important for the typical Web 2.0 company, because the optimal success case for a Web 2.0 app is that it goes viral and spreads quickly. With fixed provisioning you run into one of two scenarios that will hurt your business badly: either you buy enough servers to handle the peak load and end up with a huge amount of wasted resources once the initial stream of new users ebbs away, or, even worse, you don’t have enough resources and users experience outages of your service. The latter creates irreversible damage to your reputation, and therefore to your business, and will probably cost you even more money than the hardware for the maximum load. With flexible cloud computing you can scale your computing resources up and down as needed and always pay only for what you actually use.
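This trade-off can be sketched numerically. The demand curve, per-server capacity, and hourly price below are made-up numbers; the point is only that elastic provisioning tracks actual demand, while fixed provisioning pays for the peak around the clock.

```python
import math

# Hypothetical hourly demand (requests per hour) during a viral launch:
demand = [100, 500, 5000, 20000, 8000, 2000, 500]

CAPACITY_PER_SERVER = 1000   # requests one server can handle (assumed)
COST_PER_SERVER_HOUR = 0.12  # assumed hourly price per instance

# Fixed provisioning: buy enough servers for the peak, pay for all of
# them during every hour, whether they are busy or idle.
peak_servers = math.ceil(max(demand) / CAPACITY_PER_SERVER)
fixed_cost = peak_servers * COST_PER_SERVER_HOUR * len(demand)

# Elastic provisioning: scale the instance count to each hour's demand.
elastic_cost = sum(
    math.ceil(d / CAPACITY_PER_SERVER) * COST_PER_SERVER_HOUR for d in demand
)

print(f"fixed:   {fixed_cost:.2f}")
print(f"elastic: {elastic_cost:.2f}")
```

With these assumed numbers the fixed model pays for the 20-server peak during all seven hours, while the elastic model pays for only 38 server-hours in total; the sharper the demand spike, the larger the gap.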
Now let’s take a look at how the CapEx and OpEx picture changes for the customer of a software vendor that has moved its application to the cloud.
We are also looking at two different scenarios here (see figure 5 for a graphical representation):
- A utility computing scenario with significant portions of the solution moved to the cloud but some local components.
- Pure Software as a Service (SaaS) play with the whole solution in the cloud.
Figure 5: Utility Compute vs. pure SaaS Expenditure - Enterprise Customer View
In the first scenario the enterprise is significantly reducing its IT assets and the associated amortizations and operational expenses. By consuming a service from the cloud it doesn’t need to acquire the assets for running the solution locally and also saves on the operation costs of this solution. Again upfront CapEx is moved to ongoing OpEx.
There will still be significant investments in solution assets and associated amortizations, as the enterprise needs to buy the solution itself (usually through one big upfront payment), and it will need a certain amount of IT assets to run the non-cloud solution items locally. From a purely technical view this situation can still be improved, although performance requirements, the amount of data processed, or simply legal reasons often make it impossible to move to a more highly optimized solution.
A solution with all server-based items running as a Software-as-a-Service or Software+Service model in the cloud can significantly improve the cost function of the solution. All the enterprise now needs are the IT assets for the local workstations and the client software for accessing the cloud-based service. This would typically be a web browser, but it can also be a more sophisticated fat-client application built specifically for consuming cloud services. In that case we refer to it as Software + Service instead of Software as a Service.
Without any local server solution items, we completely eliminate the need for running a data center and thereby remove essentially every cost item associated with it. No more expensive solution items or depreciating IT assets. Operations costs are also significantly reduced, since only the client workstations need to be maintained, not the servers.
Of course this type of solution will most likely increase the cost of the online service, as it now covers the whole solution instead of just a part. But again, this cost is dynamic, based on the load the application experiences, and distributed over time. With this solution all data-center-related CapEx moves into OpEx.
Of course there are not only benefits. You must pay close attention to the potential pitfalls when you investigate the cloud solutions offered to you. Typical problems include, among many others:
- Data lock-in with vendors that use proprietary data formats or programming languages.
- Cloud computing companies that are too small, with too much risk of going out of business and no longer servicing the solution.
- Cloud computing companies with the wrong set of skills and features for your specific application type.
- Insufficient network bandwidth to move data between your premises and the cloud.
Another very important aspect in planning your move to the cloud is organizational. Depending on how your organization is set up, a move to the cloud might become a political nightmare if the organization is not properly aligned with the new strategy. If utility costs are currently absorbed by the overall corporate budget, they might get moved to the IT budget and cause additional difficulties for the IT department in justifying the move to the new model. Done right, the new model actually allows for a more direct correlation between software services consumed and money paid. The governance and audit capabilities of the cloud provider’s infrastructure should therefore play a very important role in your decision.
Programming Models for the cloud
Next to the business aspects, one crucial technical consideration in evaluating whether your application should move to the cloud is the programming model. Programming for the cloud is fundamentally different from programming local applications due to the inherent scale-out requirements.
Eric Brewer postulated the CAP theorem in one of his talks:
- Consistency
- Availability
- Tolerance to network Partitions
In a cloud application you can achieve only two of the three properties above. If you require consistency and availability, your application can’t be tolerant to network partitions. If you need an application that is consistent and uses many partitions, then it can’t be continuously available, because it will have to wait for synchronization of its items; and an application that requires 100% availability and uses many partitions can’t be perfectly consistent at all times.
This leads to the BASE model:
- Basically Available
- Soft state
- Eventual consistency
This is in strong contrast to the widely accepted ACID properties (atomic, consistent, isolated, and durable). ACID promotes strong consistency by isolating actions and using pessimistic locking on items. With BASE some inconsistency is acceptable, as there are situations where stale data does not significantly hurt your application. Availability is the first concern: your application has to be always available and perform well, so you can’t afford pessimistic locking. BASE essentially represents a best effort to provide consistent data, as opposed to the guarantee of consistent data through committing or rolling back transactions. If something fails under BASE you need compensating actions, because typically there is no way of simply rolling back to the state before the action, as the system may already have changed at another endpoint.
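The compensating-action pattern can be sketched as follows. The order-processing steps and their names are purely illustrative; the point is that each completed step is undone by an explicit compensating action rather than by a transactional rollback.

```python
# Sketch of a BASE-style workflow: every step pairs an action with a
# compensating action, since there is no global transaction to roll back.
# The order-processing steps below are purely illustrative.

log = []

def fail_shipping():
    raise RuntimeError("shipping failed")

def run_with_compensation(steps):
    """Run (action, compensate) pairs; on failure, compensate in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            # Completed steps cannot simply be rolled back: undo them
            # with their compensating actions, newest first.
            for undo in reversed(done):
                undo()
            return False
    return True

steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (fail_shipping,                       lambda: None),
]

ok = run_with_compensation(steps)
print(ok, log)
```

Note that a compensating action is a new, forward-moving operation (a refund, not an undone charge): between the failure and the compensation, other parts of the system may observe the intermediate state, which is exactly the eventual consistency BASE accepts.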
While ACID produces a very conservative and rigid system, which is usually quite difficult to move forward and evolve into a new version (e.g. schema evolution), BASE provides a much more flexible and agile approach that allows for continuous evolution of the solution.
In most cases you will not see a waterfall release model in cloud based services, but more of an organic and evolutionary approach to development and introduction of new features. Depending on the needs of the client you will have to find the right mix of guaranteed, fixed feature sets and very agile applications.
Even in difficult economic environments like the current one, working on the issues of energy use and environmental change provides an opportunity to make a difference in the world. It’s the right thing to do! Oh, and by the way, you can also make money by doing it!
This paper highlights how you can build a sustainable economic advantage by making your software solution more “green”. As the world transitions to new ways of using energy and managing natural resources, you can be at the forefront of GreenIT initiatives, securing not only your company’s future but also a healthy living environment for future generations. Today you have the opportunity to use software to help eliminate the greenhouse gas emissions associated with IT, which accounts for an astonishing 2% of all global electricity production.
With energy efficiency gains, the IT industry can dramatically increase computing productivity without increasing the amount of energy consumed by computers. Most of this productivity gain cannot be achieved within a single device; as in many areas of our lives, the solution lies in the collaborative use of resources. Cloud computing is far more than the sum of its parts. It is a new era in computing, enabling unprecedented improvements in the efficiency and availability of today’s computing power.
Good use of cloud computing will effectively enable you to turn GREEN into GOLD.