Hate To Break It To You, But The Browser Is A Tier
“3-tier architecture.” Those words sounded like hogwash to me as I sat in my cubicle 13 years ago, and they still do, mainly because I have never actually seen a 3-tier architecture. I have only seen 2-tier and n-tier architectures with a mish-mash of strange layers and pseudo-service components.
Before I say anything more, I want to get one thing out of the way. The rough difference between a layer and a tier is this: a tier is the physical isolation of software components, such that they can be distributed over the network to other servers; a layer is a logical grouping of software components, organized in patterns that suit a particular level or depth of the application (i.e. UI versus business logic versus database), often managed via isolated projects or DLLs. If you disagree with that, I suggest you go Google it for yourself. Tonight I'm somewhat merging the two terms, speaking mainly of physical tiers while noting that, in most real-world setups, these tiers are also logically grouped as layers.
There is a strange belief among some developers and their managers that web servers represent the UI tier (and layer) of a web application. I am not sure where this belief comes from, but I suspect it has something to do with prior experience with less-mature web technologies, where HTML was static and the browser was more of a "dumb terminal": Lynx, IE2, Netscape 2. These days, however, web sites can perform, often do perform, and in my opinion often should perform all dynamic view-layer functions within the browser itself, using dynamic HTML and AJAX. A "good" web application should be built to execute entirely within the web browser; the web server should only be used to access data and to run proprietary business logic and algorithms.
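A minimal sketch of that division of labor might look like the following. The endpoint name (`/api/orders`) and the data shape are hypothetical; the point is that the server hands over only raw JSON, and the entire view is assembled by script running in the browser.

```javascript
// Hypothetical sketch: the server exposes only a data endpoint
// (/api/orders, returning JSON); the browser owns all view logic.

// Pure view function: builds markup entirely on the client from
// raw data, so the browser genuinely acts as the UI tier.
function renderOrders(orders) {
  var rows = orders.map(function (o) {
    return "<tr><td>" + o.id + "</td><td>" + o.total.toFixed(2) + "</td></tr>";
  });
  return "<table>" + rows.join("") + "</table>";
}

// Classic AJAX: ask the server for data only, never for markup.
function loadOrders(done) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/orders");
  xhr.onload = function () {
    done(renderOrders(JSON.parse(xhr.responseText)));
  };
  xhr.send();
}
```

Because `renderOrders` is a pure function, the "UI tier" here is nothing more than script; the server never touches presentation.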
As you tear down the moving parts that surround a web server, something begins to happen: everything gets blazingly fast. Why? Every endpoint you add to the life cycle of an application session adds I/O, unpacking and repackaging, and potential points of failure. The more layers you add, the harder it becomes to keep up with performance requirements and maintenance. Consider the possibility of rich business logic support within SQL Server stored procedures. Imagine all your business logic tucked away in stored procedures. It's doable, folks; a number of companies hire application developers who are SQL Server developers first and C# or web developers as an afterthought, with C# used only for UI. (I know this because I interviewed with one such company not long ago.) Every development team has a different perspective on how application development should work. Usually these differences stem from differences in business domains. Sometimes, however, they stem from variations in management experience (i.e. cluelessness).
One of the applications I work on is a painful 5-layer, 4-tier architecture for a small-to-medium sized application. (I won't say whether it's my day job or a side project co-developed with a friend; let's just say that since I learned of these circumstances earlier this year, they have had me flabbergasted.) The five layers are: browser UI, server-side app logic, a DAL accessible via WCF web services, CRUD stored procedures that are the only path to the data, and finally the database itself. (Plus some Windows services, each with multiple tiers.) It grew to four tiers this year because a "privacy committee" decided that the web server, being "accessed by the customer," needed another layer and tier between it and the database where other customers' data is stored. The application literally could not be migrated to new servers, or even be given access to its own database, until it met this requirement.
I won't get into how painful it was to inject another tier, including the months one developer spent just getting the DAL proxied via WCF to the isolated web services servers. Nor will I blabber on about the leaders who blame the developers for wasting time on a server upgrade that yielded no performance improvements, while simultaneously insisting that the switch to a multi-tier architecture was somehow good for the application. Nor about their nerve in continuing to call this "3-tier" when it is clearly at least four tiers.
Introducing tiers to a web application is immensely costly to its performance. If n-tier architectures mean "scaling out," let's put it this way: scaling out a legacy application that wasn't built for it is extremely hard computer science, and it has no business being mandated by people who are not experienced software developers at their core unless they are willing to budget at least six to twelve months for a rewrite. The non-leader developers in this case only had the authority to patch the existing codebase to meet the requirement of getting the data proxied. To correctly build an n-tier, multi-layered application, you have to design much of it that way from the ground up. Architectures that work well in a two- or three-tier configuration often will not scale at all in an n-tier setting. That proved true here: some features in the app I'm describing now take minutes to do what used to take a few seconds, because the data must make an extra hop via unoptimized XML, and the leadership refuses to allow binary+TCP serialization.
So here is my challenge to you, the reader (who for all I know is only myself, but it is a challenge nonetheless): create a rich web application consisting of nothing but static HTML and script. Mock the web services with static XML or JS(ON) files. Create unit tests for all business and data points. At the end of it all, ask yourself, "Why do I need RoR/ASP.NET/PHP?" I'm sure the answer will not be "I don't," but the point of the exercise is to prove out the significance and self-sufficiency of the browser as a tier and the power of client-side application programming for the web. This approach also helps with server-side unit testability. Web scripting languages and ASP.NET tend to be painfully untestable because they require a browser to render results before any functionality can be verified; but if the server-side code is stripped of all rendering logic, then unit testing what remains, the business logic, is a cakewalk.
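The mocking step of the challenge above might be sketched like this. The customer data and the `discountFor` rule are invented for illustration; the shape of the idea is what matters: a static JSON structure stands in for the web service, and the business logic is pure functions that can be unit tested with no server-side framework at all.

```javascript
// Sketch of the exercise: the "web service" is mocked with static
// JSON (in the real exercise, a .json file fetched by the browser),
// so business logic runs and tests with no server at all.
// Data shape and discount rule are hypothetical.

// Static mock standing in for a web service response.
var mockCustomers = [
  { name: "Acme", yearsActive: 7 },
  { name: "Initech", yearsActive: 1 }
];

// Pure business logic: no rendering, no HTTP, trivially testable.
function discountFor(customer) {
  return customer.yearsActive >= 5 ? 0.10 : 0;
}

function applyDiscounts(customers) {
  return customers.map(function (c) {
    return { name: c.name, discount: discountFor(c) };
  });
}
```

When the mocks are eventually replaced with real endpoints, only the fetch layer changes; the tested business logic stays exactly as it was.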
This exercise can also prove out the value (and justify the pay scale) of strong front-end developers. As someone who has worked hard to be strong across all layers of web development (front-end, middle-tier, and database), it makes me sick to my stomach when I hear a boss shrug off a resume for being strong on the front end, or praise a resume for lacking front-end talent. (I am not speaking of my own resume, by the way; let's just say I am within earshot of occasional interviews for other teams.) The truth is that every layer and every tier is important and can have equal impact on an application's success. Leaders need to understand this, along with the costs and frustrations of adding more tiers and layers to an existing architecture.
Published at DZone with permission of Jon Davis, DZone MVB. See the original article here.