Rethinking Database TCO
Indirect costs make total cost of ownership more complicated than it once was. Be sure to take flexibility, standards support, and performance limits into account.
How much do we need to care about database costs? Isn't the database little more than a cupboard that you throw data into? And aren't all databases nowadays just more of the same? Don't they cost pretty much the same no matter which you choose?
In reality, costs vary considerably, and the choice is broader than it has ever been, as many new database products have emerged in recent years. So once the developers have selected which products fit the context of the application being built, it only makes sense to determine the total cost of ownership (TCO) of the alternatives.
Technically, TCO, an idea borrowed from management accounting, is a financial estimate intended to help buyers calculate the direct and indirect costs of a product or system. Direct costs are relatively easy to calculate, but as far as I am aware, there is no precise method for determining the indirect costs. That, as they say, is where the bodies are buried.
Let's begin by considering the direct costs. There are license and support costs of some kind, there are the infrastructure costs of deploying the database, and there may be significant training or hiring costs. Once you've paid those costs, developers build the applications and the database, hopefully, does what is asked of it. If the application and database are implemented in the cloud, most of the direct costs are transformed into a regular rental. That no doubt makes the mathematics easier, and if you choose not to investigate the indirect costs of a database, you might proudly claim that the TCO is the monthly cloud rental plus the costs of hiring and training. But that would be far too simplistic.
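As a back-of-the-envelope illustration, the direct-cost-only view of TCO reduces to simple arithmetic. The figures below are hypothetical placeholders, not drawn from any vendor's pricing:

```python
# Minimal sketch of a direct-cost-only TCO estimate.
# All dollar figures are hypothetical, for illustration only.

def direct_tco(monthly_rental: float, months: int,
               hiring: float, training: float) -> float:
    """Direct TCO: cloud rental over the period plus one-off staffing costs."""
    return monthly_rental * months + hiring + training

# Example: $4,000/month for 3 years, plus $30k hiring and $10k training.
total = direct_tco(monthly_rental=4_000, months=36,
                   hiring=30_000, training=10_000)
print(f"Direct TCO over 3 years: ${total:,.0f}")  # Direct TCO over 3 years: $184,000
```

The point of the article, of course, is that this number is only a lower bound; the indirect costs discussed next sit on top of it.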
The Indirect Costs
All of the indirect costs are business costs with business implications, although some have a technical aspect to them, which we’ll refer to as we list them. They are as follows:
Vendor lock-in (flexibility of deployment). Nowadays almost all databases run on commodity Intel hardware, which also means that they can run in the cloud. Where this is not the case, it is a significant drawback and a concealed cost. The cost arrives in the future when, for example, you want to move to other hardware, move into the cloud, or even run an application as a service. Being tied to a particular cloud provider or to high-powered hardware makes it far more difficult to shave costs.
Limited use case applicability (standards support). There are database standards, particularly the standard query language (structured query language, or SQL) and the way that transactions are handled. Perhaps surprisingly for those unfamiliar with the technical side of databases, not all products fully comply with such standards. The indirect cost of poor standards support is that the database cannot be used for some applications; a database might be good for analytics but not for transactions, for example. The result is that you will likely significantly increase the number and variety of databases you use.
Time to market/opportunity loss (ease of development). Not all databases are equal in this respect. With some, the developer may face a significant learning curve, and there is a definite cost to this, both in terms of labor (see below) and in terms of delay in getting a product to market. However, this can be a double-edged sword. Databases that are more versatile and have greater functionality will sometimes involve a longer learning curve for developers; sophisticated applications usually demand sophisticated databases. The trick is finding a database that balances functionality and future flexibility with the ability to translate existing skills and code into a new product faster than your competition.
Labor costs (ease of management). In addition to any specialized skills you might need for non-standard databases, databases need administration staff to monitor and manage them, both to tune the database so that its performance does not deteriorate over time and to manage its use of resources. Products vary: some require fairly constant DBA attention, while others have automation facilities that reduce the requirement to a minimum (almost zero) and hence carry a much lower labor cost.
Customer attrition (performance limits). Like trucks and airplanes, databases can only go so fast under a given load. A database that is too slow for the work it needs to do is worse than useless: it will need to be replaced, or you'll begin to suffer customer attrition. Databases that perform well under a wide range of workloads, and can adjust to support customer growth or spikes, are inherently more useful and hence better value for money. Where a database offers a very broad range in both data volumes and performance, it is generally described as highly scalable.
More customer attrition (availability). If your customers can't use your product when they want to, they'll find another product. Availability is simply a measure of how often a database is out of commission due to failure of one kind or another. Where availability is inadequate for business requirements, the cost of downtime is the cost of the business lost during downtime. In an increasingly 24/7 world, this cost seems to rise each year for most businesses. For a large organization, the average cost of the failure of a business-critical application is likely to be in the region of $1 million per hour. For a web-based business like, say, Twitter or Facebook, it will be far greater than that.
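To make the availability cost concrete, a rough sketch can translate an availability percentage into expected annual downtime, then price it at the ~$1 million per hour figure mentioned above. Both the availability levels and the hourly rate here are illustrative assumptions:

```python
# Sketch: expected annual cost of downtime at a given availability level.
# The $1M/hour figure and the availability levels are illustrative only.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability: float) -> float:
    """Expected hours of downtime per year at a given availability (0 to 1)."""
    return (1.0 - availability) * HOURS_PER_YEAR

def downtime_cost(availability: float, cost_per_hour: float) -> float:
    """Expected annual business cost of that downtime."""
    return annual_downtime_hours(availability) * cost_per_hour

for availability in (0.99, 0.999, 0.9999):
    hrs = annual_downtime_hours(availability)
    print(f"{availability:.2%} availability -> {hrs:7.2f} h/yr "
          f"-> ${downtime_cost(availability, 1_000_000):,.0f}/yr at $1M/hour")
```

Even the jump from "two nines" to "three nines" is worth tens of millions of dollars a year at that hourly rate, which is why availability dominates the indirect cost picture for many businesses.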
Lack of flexibility (distributability). The ability of a database to be easily distributed across multiple data centers or between the cloud and a data center can be very important. Very few databases are well engineered for this kind of capability, but for those that are, the cost of delivering high availability and disaster recovery is much lower, and the cost of migrating data or databases from one geographic location to another is also much lower. This can be particularly important for SaaS applications that run primarily in the cloud but might also need to run on-premises.
The cost of weak database capability in some areas, particularly the last one mentioned, is difficult to put a price on. This brings us to the question of how to use this list.
Using This TCO List
The only indirect costs that matter are those that you will actually pay. If you simply want a place to put data and you have no need for the capabilities discussed above, then the direct database cost is all that matters. However, as soon as you have a genuine requirement for some of those capabilities, you need to consider the TCO according to what the database can actually do and how well it can do it. All of this is contextual.
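The "only count what you will actually pay" rule can be expressed as a trivial filter over the cost categories. The line items and dollar amounts below are hypothetical, meant only to show the shape of the calculation:

```python
# Sketch: combine direct costs with only the indirect costs that apply
# to your situation. All line items and figures are hypothetical.

direct_annual = 48_000  # e.g. annual cloud rental

# (estimated annual cost, applies-to-us flag) per indirect category
indirect_items = {
    "vendor lock-in / migration risk": (25_000, False),  # not planning to move
    "extra DBA labor":                 (60_000, True),
    "downtime (lost business)":        (90_000, True),
}

applicable = sum(cost for cost, applies in indirect_items.values() if applies)
annual_tco = direct_annual + applicable
print(f"Annual TCO estimate: ${annual_tco:,.0f}")  # Annual TCO estimate: $198,000
```

The hard part, of course, is not the arithmetic but deciding which flags are genuinely True for your business and estimating the amounts honestly.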
However, you also need to have an eye for the strategic. For example, if you select an elastic SQL database like NuoDB, data mobility and relocation will never be a problem. The cost of living without that, if you really need it, escalates over time. This speaks to the fact that if you are making an important database decision, you need to consider the long-term role of the product rather than just the first few applications of it.
Published at DZone with permission of Robin Bloor, DZone MVB. See the original article here.