Improving Performance in Enterprise Web Applications
These tips will help you optimize the performance of your enterprise web applications to create a better experience for your users.
Every team that builds a large web application faces a familiar trade-off: delivering application functionality on time, with high quality, or with high performance. Teams can pick one or two of those, but they can't pick all three.
Most teams opt to focus on performance only if and when it becomes a problem. This, unfortunately, can be far too late for some projects. Anyone who has been in the industry can empathize with both sides of the equation: choosing to defer performance concerns, and seeing the negative impact that deferral can have on the success of the product as a whole.
It is a lesson I've learned from hard experience, so I want to make sure others can learn from my mistakes. In this post, I suggest a handful of principles that help to find a happy medium for delivering high-quality software applications while focusing on performance.
Significant improvements can be realized even if only one or two of the principles are applied. Applying all of them, of course, will produce the best results.
Make Performance a Priority
The single most important thing leadership can do for a product is to make performance a priority. Not necessarily the priority, just a priority. Every product, project, team, team member, and even sprint has its own list of priorities. With so many lists and so many opinions, it is important that the leadership team makes it known where performance stands among all of the other things the team must accomplish.
To use a Manager Tools analogy, think of the actions a team must accomplish during a sprint as balls they must juggle. Some team members can juggle many smaller balls, while others prefer to juggle just one important one. Add another ball to the mix, and there will invariably be others that get dropped. Making sure the "right" ball gets dropped is tough.
Team members themselves know best the impact of what can be omitted and what is truly crucial to their success. An open and honest relationship full of trust is critical to ensuring that, when priorities get shuffled, the team starts focusing on the more important things.
Know Your Users
Software development does not happen in a vacuum, especially enterprise web applications. Decisions are ideally made to help the application better serve the short-term and long-term vision of the users.
This is an important distinction to make — decisions are made on behalf of users. Leadership teams are representing the user's interests, and the decisions they make should not be made lightly.
What typically happens is that new features equate to a higher probability of new revenue, whereas enhancements and performance tweaks are moved down the priority list. These decisions make total sense until users begin to clamor that the application has become too slow for them to work effectively.
Making performance a priority allows for honest conversations about how important performance is to the daily work of the end users. The best outcome of this conversation is to have the entire development team, everyone from those who write the code to those who test its quality, understand the end users. Not what the objectives of the business are; that is the responsibility of a Business Analyst. Not the quarterly targets across the verticals; that is the responsibility of the Product Owner.
What the development team needs to understand is what the end users actually do. Start off with the simple things:
- Do the end users use their laptops or desktops?
- Do they have external monitors, and if so how many?
- How old are the computers?
- Is the Internet connection constant, or do they go out into the field and use a VPN to connect?
- Do end users work 9 to 5? Is there a typical time of day when they're on the application and it becomes unusable?
- How are users using the application in ways the development team did not intend for them to use it?
- Are the answers the same for all of the users?
Those simple questions start to uncover a world of previously-unknown insights. These crucial questions help bridge the gap between those who make and those who use. After all, they are called users for a reason.
Empathy ensures that the development team takes the performance of the application as seriously as if someone they know uses it. It gives a new focus to the Value Statement in a User Story.
The best and most widely documented SLAs in the world can't compare to knowing the name of someone who uses the software you create. Performance moves from an item on a checklist to something the development team takes personally. After all, they don't write software in a vacuum.
Weighted Scientific Method
Now that performance is a priority, and the development team takes that priority personally, knowing where to start attacking is the next logical step. At this point, it can be easy to make a misstep and focus on optimizations that do not benefit the end users in a meaningful way. It is easy to quote articles about moving all <script> tags to the bottom of the <body> as a way to speed up the loading of the page, though this tends to matter only for server-side rendered HTML.
It is hard, however, to know what the right thing to focus on is. Time is the single resource that is at a premium, nonrenewable, and is needed to accomplish everything in software development. The number of things a team can work on in a sprint is dictated ultimately by time, which is why it is so important to find the right thing to work on to help with the performance of the application.
The right thing to work on lies in the sweet spot between what leadership believes is important and what the users themselves feel. That sweet spot carries the highest weight of anything the team could work on. The next step is to list everything else in order of user pain and business value.
The business value comes not just from what the Business Analyst knows about the future of the application within the organization; it also comes from the amount of change the development team needs to make to address the issue. Every proposed change should be graded on the ratio of how many lines need to change, to how much quality testing is needed, to how much performance improves.
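That grading can be sketched as a simple scoring function. The field names and weights below are illustrative assumptions, not a formula from this article; the point is only that effort and expected speedup get combined into one comparable number:

```typescript
// Illustrative sketch: grade proposed performance changes by
// expected speedup per unit of effort. Weights are assumptions.
interface ProposedChange {
  name: string;
  linesChanged: number;  // rough size of the code change
  testingHours: number;  // estimated QA effort
  estSpeedupMs: number;  // expected reduction in response time
}

// Higher score = more performance gain for less effort and risk.
function grade(change: ProposedChange): number {
  const effort = change.linesChanged + change.testingHours * 10;
  return change.estSpeedupMs / Math.max(effort, 1);
}

const backlog: ProposedChange[] = [
  { name: "batch API calls", linesChanged: 40, testingHours: 2, estSpeedupMs: 800 },
  { name: "rewrite ORM layer", linesChanged: 2000, testingHours: 40, estSpeedupMs: 1200 },
];

// Sort highest-value-first so the team attacks the best ratio.
backlog.sort((a, b) => grade(b) - grade(a));
console.log(backlog.map(c => c.name));
// The small, high-impact change ranks ahead of the big rewrite.
```

The exact weights matter far less than having the conversation that produces them; the scoring only makes the trade-offs explicit.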
Identifying those changes is the tricky part. That is where the scientific method comes in. Something so simple, something in the room of every elementary school science class, can have a profound impact on how software is developed.
Rather than aimlessly guessing which changes have the most impact, the development team should focus on developing a hypothesis, implementing it, observing the results, then iterating. The quicker the team can get from hypothesis to iteration, the better the results. No ideas are taboo.
Minimizing Out-of-Proc Calls
There are a few key things that separate good applications from great ones. Most applications have the same nuts and bolts, but a few things tend to help an application crest from good to great, especially when it comes to performance. Not surprisingly, it is something computer scientists have known for decades: minimizing out-of-process calls maximizes performance.
When you consider how to obtain the data necessary to complete an operation, there are three grades: fast, slow, and slowest. Code that gets its data from memory is fast, from disk is slow, and from another system, like a database, is the slowest. Minimizing the number of calls the code makes to another system, especially a database, tends to have the highest ROI: the fewest code changes for the best performance gains.
If the code is doing database queries inside of a for loop, it will be painful. If the UI is making a dozen API calls, and each of those API calls does a single database query, that is 24 round trips to remote systems, and the end user ultimately pays the penalty for every one of them.
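The difference can be sketched with hypothetical data-access functions, where each function call stands in for one round trip to the database. The names `fetchUserById` and `fetchUsersByIds` are assumptions for illustration:

```typescript
// Illustrative sketch of the N+1 query pattern versus one batched call.
// An in-memory Map stands in for the database; a counter stands in for
// the round-trip cost the end user pays.
type User = { id: number; name: string };
const db = new Map<number, User>([
  [1, { id: 1, name: "Ada" }],
  [2, { id: 2, name: "Lin" }],
]);

let roundTrips = 0;

function fetchUserById(id: number): User | undefined {
  roundTrips++; // one out-of-proc call per id
  return db.get(id);
}

function fetchUsersByIds(ids: number[]): User[] {
  roundTrips++; // one out-of-proc call for the whole set
  return ids.map(id => db.get(id)).filter((u): u is User => !!u);
}

const ids = [1, 2];

// Query-in-a-loop: one round trip per id.
roundTrips = 0;
const slow = ids.map(id => fetchUserById(id));
console.log(roundTrips); // 2

// Batched: the same data in a single round trip.
roundTrips = 0;
const fast = fetchUsersByIds(ids);
console.log(roundTrips); // 1
```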
Work on ways to cut the number of API calls in half then cut the number of database calls in half. The team should keep iterating until the cost outweighs the benefit, then move on to the next feature to optimize the performance.
Keep in mind that caching data is another form of minimizing out-of-proc calls. Knowing when to invalidate a cache is one of the hardest problems in computer science, so caching should be avoided until it is absolutely needed.
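When caching does become necessary, one common shape is a time-to-live cache in front of the expensive lookup. The sketch below is a minimal example under stated assumptions: the TTL value and the `loadFromDb` helper are illustrative, and a real invalidation strategy depends on how stale the data is allowed to be:

```typescript
// Illustrative sketch: a TTL cache that trades staleness for
// fewer out-of-proc calls. Entries expire after ttlMs milliseconds.
class TtlCache<K, V> {
  private entries = new Map<K, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  getOrLoad(key: K, load: (key: K) => V): V {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value; // still fresh
    const value = load(key); // the expensive out-of-proc call
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}

let dbCalls = 0;
const loadFromDb = (id: number) => { dbCalls++; return `row-${id}`; };

const cache = new TtlCache<number, string>(60_000);
cache.getOrLoad(7, loadFromDb); // miss: hits the database
cache.getOrLoad(7, loadFromDb); // hit: served from memory
console.log(dbCalls); // 1
```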
Final Thoughts: Iterate in the Future
The team should first work on features that already have performance problems before focusing on new or in-flight features. Every technique, every method that the team discovers, should start a living catalog of performance best practices. It is difficult to say what will and will not work on a given application. The users are always different, the needs are always different, and the features are always different.
The teams that develop the features have their own strengths and weaknesses. However, the teams that develop the applications know their application the best. If they know the users, make it a priority, and continuously iterate on what works, performance will invariably improve.
I suggest you implement these principles in your enterprise application development. They are, however, something that cannot be a focus for just a sprint or even a quarter. They need to be woven into the fabric of every story, every task, and everything the team does.
Published at DZone with permission of Zach Gardner, DZone MVB. See the original article here.