Skills Developers Need To Improve Performance Tuning and Testing

Shift everything left in the SDLC that you can to save time down the road and produce more reliable applications.

To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in performance testing and tuning. We asked them, "What skills do developers need to ensure their code and applications perform well with performance testing?" Here's what they told us:

  • Shift left. Understand what you can and cannot shift left. You cannot shift load testing left. Leverage capable lab space to replicate user conditions. Focus on achieving value by shifting left.
  • Don’t believe in magic. Understand what’s going on underneath the abstraction layer: what the database is actually doing when you perform critical tasks, and how the database integrates with the app (see the query-plan sketch after this list).
  • Developers need to acquire “functionality first, scalability a very close second” thinking. Even when creating a simple new feature, it’s better to think ahead so that you don’t end up rewriting the feature soon after introducing it. Take your time and build it right the first time.
  • Understand what your workloads are. Application engineering has the biggest need for this. Look at real end-to-end transactions and at how to make latency more predictable without degrading those transactions. Think about how to optimize every release cycle by understanding the compute layer and all endpoints. Ask whether the architecture can still support the workloads if they grow by a factor of X.
  • Be familiar with APM tools, synthetic monitoring, and the languages at the center of your code. The whole DevOps culture and mentality is a big deal. It is imperative to instrument everything and collect metrics aggressively, operational factors included. Instrumentation is as important as the application code itself (see the instrumentation sketch after this list).
  • Ask what the stress level for the app is and how you handle failures. It’s all about scalability: knowing whether or not the application can scale and handle stress. Build to be resilient to failure, and monitor knowing that things will fail. Know how to handle failure and how to communicate it so that it has the least impact (see the retry sketch after this list).
  • Know microservices architecture to make robust, scalable apps that are fast to release. When writing code, walk away, then think again about how to do it differently. Challenge your implicit assumptions. Practice pair programming. Stop assuming what the user wants; ask them and observe them.
  • Think about how users will be accessing and using the application or service. Be flexible and willing to adjust to the constantly shifting tech landscape. The tools and processes used today may not be valid for the technology of tomorrow.
  • Demand that your organization build performance testing into CI/CD for both throughput and resource consumption. There are extensions to modern IDEs that provide basic performance feedback. Know which features must meet specific performance criteria. Monitor locally for behavior that changes with each code change. This should be fully automated into the pipeline (see the benchmark sketch after this list).
  • Check the CPU and memory cost of every piece of code to ensure you are making your apps as lean as possible. You cannot anticipate bursts, and you will have latency issues if you do not code efficiently (see the profiling sketch after this list).
  • Developers should absolutely learn about visual synthetic monitoring. It will reframe their mindset and help them realize that performance should be measured in more ways than simply CPU usage or disk IOPS. In this day and age, what the user experiences and perceives as performance is paramount.
  • The first is having the right attitude and defining metrics for their particular application that determine whether it is performing well. Too often, this is dealt with in an ad hoc manner, particularly when the priority is developing new functionality. Once that is in place, there should be a formalized procedure for continual testing to narrow down the window during which performance regressions are introduced (see the baseline-comparison sketch after this list). This does not require specialized knowledge; just knowing that a performance problem was introduced between two dates may be enough for a developer to narrow down the problematic area. That covers a large part of the problem at a relatively low investment. Anything after that is a long road: learning each individual monitoring and analysis tool, as well as practical methodologies for conducting performance analysis, isolating problems, and even robustly determining whether a workload has gained or regressed. It takes time, practice, patience, and a combination of many different skills.
  • However, a detailed analysis can also be conducted by a limited set of people who then inform a team that a problem lies in their specific area. It also helps if developers have a solid understanding of how the hardware works and how their software interacts with both the operating system and the hardware. Even with a solid grasp of data structures, algorithms, and design patterns for different classes of problems, it’s easy to get blindsided by some implementation detail of the OS or the hardware. Of course, I understand that people do not always have sufficient time to invest in this knowledge, but the right tooling can at least highlight the problem when this situation occurs.
  • I would encourage developers not only to write code but also to understand the nature of the platform they ultimately deploy to in production. Even if a developer does not have production access, virtual machines or containerized environments bring development environments one step closer to production in configuration, if not in size. The flip side is that they let development teams replicate production performance defects or issues in relative safety. Nothing beats understanding complex system performance by observing it directly in production or production-like environments.
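
To make “don’t believe in magic” concrete, here is a minimal sketch of looking underneath the abstraction layer at what the database actually does with a query. It uses Python’s built-in sqlite3 module and SQLite’s EXPLAIN QUERY PLAN purely for illustration; the orders table and query are hypothetical, and production databases have their own equivalents (PostgreSQL’s EXPLAIN ANALYZE, MySQL’s EXPLAIN).

```python
import sqlite3

# Hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

query = "SELECT * FROM orders WHERE customer_id = ?"

# Ask the database how it intends to execute the query.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)  # Without an index: SCAN orders (a full-table scan).

# Create the index the access path needs, then check the plan again.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)  # Now: SEARCH orders USING INDEX idx_orders_customer.
```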
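
The “instrument everything” advice can start very small. This sketch assumes the prometheus_client Python package; the metric names and the handle_request function are invented for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Invented metric names, purely illustrative.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()              # Records how long each call takes.
def handle_request():
    REQUESTS.inc()           # Counts every request.
    time.sleep(random.uniform(0.01, 0.1))  # Stand-in for real work.

if __name__ == "__main__":
    start_http_server(8000)  # Exposes /metrics for a scraper to collect.
    while True:
        handle_request()
```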
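
“Build to be resilient to failure” often begins with retrying transient errors with backoff. This is a minimal sketch; call_dependency is a hypothetical flaky downstream call.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a timeout or connection error from a dependency."""

def call_dependency():
    # Hypothetical downstream call that fails roughly half the time.
    if random.random() < 0.5:
        raise TransientError("dependency unavailable")
    return "ok"

def call_with_retries(attempts=4, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return call_dependency()
        except TransientError:
            if attempt == attempts - 1:
                raise  # Out of retries: surface the failure loudly.
            # Exponential backoff with jitter so retries do not stampede.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

print(call_with_retries())
```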
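
One way to build performance testing into CI/CD, as the list suggests, is a benchmark that runs alongside the regular test suite. This sketch assumes the pytest-benchmark plugin; parse_payload is a hypothetical function under test.

```python
# test_perf.py -- run with pytest and the pytest-benchmark plugin installed.
import json

def parse_payload(raw: str) -> dict:
    """Hypothetical hot path under test."""
    return json.loads(raw)

def test_parse_payload_speed(benchmark):
    raw = json.dumps({"items": list(range(1000))})
    result = benchmark(parse_payload, raw)  # Times repeated calls, reports stats.
    assert result["items"][0] == 0          # Still verifies correctness.
```

In a pipeline, the plugin’s --benchmark-compare and --benchmark-compare-fail options can fail the build when timings regress past a chosen threshold.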
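
Checking the CPU and memory cost of a piece of code does not require specialized tooling as a first pass; Python’s standard library covers both. The build_report function is an invented, deliberately wasteful example.

```python
import cProfile
import tracemalloc

def build_report():
    # Invented workload: string concatenation in a loop is deliberately wasteful.
    out = ""
    for i in range(20_000):
        out += str(i)
    return out

# CPU: which functions dominate the run time?
cProfile.run("build_report()", sort="cumulative")

# Memory: where are the biggest allocations?
tracemalloc.start()
build_report()
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)
```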
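
The formalized procedure for continual testing described above can begin as a tiny harness that times a representative workload and compares it against a stored baseline. Everything here (the workload, the 20% threshold, the perf_baseline.json file) is an assumption for illustration.

```python
import json
import time
from pathlib import Path

BASELINE = Path("perf_baseline.json")  # Hypothetical baseline location.
THRESHOLD = 1.20  # Flag a regression if more than 20% slower than baseline.

def workload():
    # Stand-in for a real end-to-end scenario worth tracking.
    sorted(range(200_000), key=lambda x: -x)

def timed_run():
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

def measure(repeats=5):
    # Best-of-N damps scheduler noise for a quick check.
    return min(timed_run() for _ in range(repeats))

if __name__ == "__main__":
    seconds = measure()
    if BASELINE.exists():
        baseline = json.loads(BASELINE.read_text())["seconds"]
        if seconds > baseline * THRESHOLD:
            raise SystemExit(f"regression: {seconds:.3f}s vs baseline {baseline:.3f}s")
        print(f"ok: {seconds:.3f}s (baseline {baseline:.3f}s)")
    else:
        BASELINE.write_text(json.dumps({"seconds": seconds}))
        print(f"baseline recorded: {seconds:.3f}s")
```

Run per merge or nightly; when it fails, the regression is bracketed between the last passing run and the first failing one, which is exactly the narrowed window the quote describes.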
