Is Business Logic at Home in the Database?
The following are the three major reasons why business logic is best implemented in the application layer.
No, this is not a joke post. Bear with me, and I will explain under what circumstances it makes sense to move your number-crunching business logic out of your Oracle, IBM DB2, MSSQL Server, and MySQL stored procedures and into a scalable application layer.
In my opinion, these are the three major reasons why business logic is best implemented in the application layer. Further reasons can be found in the comprehensive blog post Stored Procedures are EVIL by Tony Marston.
Scalability and Performance
If all business logic is performed within the database in the form of stored procedures, then the database becomes the bottleneck. Once load starts to increase, there is a corresponding decrease in performance. The execution of a single stored procedure may be faster than executing the equivalent logic in application code. However, because stored procedures run on the database server, the application is limited by the processing power of that one server.
A remedy is to use horizontal scaling: physically separate the processing tasks, where the performance decrease is experienced, from the database tasks. This allows the application layer to scale gracefully and independently of the data layer. An object-relational mapping (ORM) layer, implemented by tools such as Hibernate or Oracle TopLink, can be used as the glue between the relational data and the object-oriented application layer. This post explains horizontal scaling for Web 2.0.
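To make the idea concrete, here is a minimal sketch of business logic living in the application tier rather than in a stored procedure. The schema, the `OrderLine` class, and the discount rule are all hypothetical, invented only for illustration; the point is that the database merely stores and retrieves rows, while the rule itself runs (and can be scaled out) in application code.

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical order-line record; not from the original article.
@dataclass
class OrderLine:
    product: str
    quantity: int
    unit_price: float

def total_with_discount(lines, discount_rate):
    """The business rule lives in the application layer, where it can be
    unit-tested and run on as many app servers as needed."""
    subtotal = sum(l.quantity * l.unit_price for l in lines)
    return round(subtotal * (1 - discount_rate), 2)

# The database is used only for storage and retrieval.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_line (product TEXT, quantity INTEGER, unit_price REAL)")
conn.executemany("INSERT INTO order_line VALUES (?, ?, ?)",
                 [("widget", 3, 9.99), ("gadget", 1, 25.00)])

rows = conn.execute("SELECT product, quantity, unit_price FROM order_line").fetchall()
lines = [OrderLine(*row) for row in rows]
print(total_with_discount(lines, discount_rate=0.10))  # prints 49.47
```

Because `total_with_discount` has no database dependency of its own, each application server can execute it locally; the database only serves rows.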
Vertical scaling can also be used, but it is far more complex and costly. It is a short-term solution, akin to simply throwing greater horsepower at the problem without addressing the underlying causes.
Vendor Lock-In
Many large companies do not predict change and are very slow to react to external forces. This is particularly evident when viewing IT infrastructure. Companies are still using legacy systems from 30 to 40 years ago, mainly because they are suffering from vendor lock-in. This is a particular problem for databases, as each vendor has its own proprietary stored procedure language and functionality, for example, Oracle 11g PL/SQL and MSSQL Server T-SQL.
You may think that this is not a problem if you build and maintain the databases for a single company, where a change in the database vendor takes years to be realized. But what happens when your database is suddenly not the flavor of the month and management is demanding change? This might be due to various factors, such as spiraling costs or poor performance. When it happens, you will find that there is an awful lot of code to rewrite. Migrating the data is one thing, but porting the stored procedures, functions, and triggers is a whole bigger story!
Now, if all the logic were held within the applications, imagine how much simpler it would be to move between database vendors, allowing you to select the best database for your current needs, not the needs of 10 to 20 years ago.
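One common way to keep that freedom is to confine vendor-specific SQL to a single, thin data-access module, so that business logic never touches dialect differences. The sketch below is illustrative only; the table name, the query, and the three dialect strings are assumptions, not a complete abstraction layer, but they show how small the vendor-facing surface can be kept.

```python
# Hypothetical example: only this module knows each vendor's pagination
# syntax; the business logic calls top_customers_sql() and stays portable.
DIALECT_QUERIES = {
    "postgresql": "SELECT name FROM customer ORDER BY name LIMIT {n}",
    "oracle":     "SELECT name FROM customer ORDER BY name FETCH FIRST {n} ROWS ONLY",
    "sqlserver":  "SELECT TOP {n} name FROM customer ORDER BY name",
}

def top_customers_sql(dialect: str, n: int) -> str:
    """Return the vendor-specific SQL for the first n customers by name."""
    return DIALECT_QUERIES[dialect].format(n=n)

print(top_customers_sql("sqlserver", 5))
# prints SELECT TOP 5 name FROM customer ORDER BY name
```

Switching vendors then means editing one dictionary (or, more realistically, letting an ORM such as Hibernate generate the dialect-specific SQL), instead of porting thousands of stored procedures.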
Stored Procedures Are an API
Stored procedures form an API by themselves. Changing an API is to be avoided, as it requires updating the client code that uses it, which means development time and money. It is common that when a table or the behavior of a stored procedure changes, a new stored procedure is added. At the start, this does not sound like much of a problem. However, when you are dealing with a 30-year-old, large-scale legacy system, you end up with what can only be described as a spaghetti junction of procedures reaching into the millions of lines of code. In effect, the code base becomes a quagmire for developers to work with and results in ever-decreasing performance, maintainability, and extensibility. This impacts business costs directly.
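The procedure-proliferation pattern has a simple application-layer counterpart. In the database, a behavior change often means freezing the old procedure and adding a `_v2` variant forever; in application code, the same change can be an optional parameter with a backward-compatible default, evolved and unit-tested alongside its callers. The function and the tax rule below are hypothetical, used only to illustrate the contrast.

```python
# Instead of get_order_total and get_order_total_v2 stored procedures,
# one function evolves in place: old callers keep their behavior, new
# callers opt in to the new rule.
def order_total(lines, tax_rate=0.0):
    """lines is a list of (quantity, unit_price) pairs.
    tax_rate was added later; the default preserves v1 behavior."""
    subtotal = sum(qty * price for qty, price in lines)
    return round(subtotal * (1 + tax_rate), 2)

lines = [(2, 10.0), (1, 5.0)]
print(order_total(lines))               # prints 25.0 (legacy behavior)
print(order_total(lines, tax_rate=0.2))  # prints 30.0 (new behavior, same API)
```

The old and new behaviors live in one tested code path, so the "spaghetti junction" of near-duplicate procedures never accumulates.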