Performance monitoring is a necessary component of the software development lifecycle. Developers, engineers, and architects continually review and analyze data from application performance monitoring tools to ensure software is running in tip-top shape.
This requires performance-focused developers to be critical problem solvers, excellent communicators, and good team players. It also requires them to understand more than performance and monitoring alone: cloud principles, the SDLC, security requirements, and integration techniques.
To help you understand this unique developer group, we surveyed the DZone audience to learn more about their work habits, their tool preferences, and how they stay current on the industry. The full survey results will be published in our upcoming Guide to Performance: Optimization and Monitoring, out on March 27, 2017.
- 62.5% say Java is their primary coding language, followed by C# at 10.9%
- 30.6% say their immediate team size is between 2-5 people
- Primary job roles – 30% are developers/engineers, 21.4% are developer team leads, and 19.7% are architects
- 82% say they are currently developing software for web applications / services (SaaS)
- And over half (54.7%) have 10-20 years of IT experience
Another interesting tidbit about performance monitoring professionals: They love to learn!
According to our survey, 90.4% said they read articles on tech sites to keep their skills up to date. And 71.1% said they participate in online training and classes.
This group is knowledge-hungry, actively looking for ways to hone their skills as programmers and developers.
Then we asked questions related to their work habits: which areas of their technology stack have performance issues, how often they solve performance issues, how long it takes, etc.
Technology stacks performance issues:
- When it comes to technology stacks, the area with the most issues, whether frequent or occasional, is application code:
  - 26.3% say issues happen frequently
  - 14.1% say issues happen sometimes
- Areas of the stack that rarely have issues: hardware (12.6%) and filesystems (13.3%)
Solving performance issues:
When was the last time devs had to solve a performance issue in their infrastructure? 14.9% said this week! However, 10.1% said they haven’t had to solve an issue for over a year.
Average time to solve performance issues:
From the moment a problem is detected to when a fix is committed in production, what is the average amount of time it takes to solve a performance-related issue?
We broke it down into hours and days.
- 2 days – 18.9%
- 1 day – 16.3%
- 5 days – 16%
- 8 hours – 15%
- 4 hours – 14.4%
- Under 1 hour – 10.5%
Most time-consuming part of fixing a performance issue:
According to our survey, the most time-consuming part of fixing a performance issue is finding the root cause of the issue (32%).
What else lengthens the performance-solving process? Collecting and interpreting various metrics, figuring out the solution to the issue, and communicating / managing people to address the issue.
We also asked questions related to their tool preferences: which tools they commonly use to find the root cause of problems, how many performance-related alerts they receive, which tests and monitoring types they use, etc.
Tools used to find the root cause of problems:
Devs could select all the tools they use to find the root cause of performance-related issues, and the most popular tool by far is application logs – 89.9%.
Other tools include:
- Database logs – 69.8%
- Profilers – 63.3%
- Debuggers – 59.1%
Number of performance-related alerts:
On average, how many alerts do devs receive each day from their monitoring tools? 33.8% said they receive 0-1 alert per day.
About 7.4% said they receive 10 alerts a day, 12.2% said 5, and 3.8% said 50.
Performance tests and monitoring types used:
Programmers and architects use many different tests and monitoring types. According to our survey, load tests (66.9%), log management/analysis (65%), and website speed tests (53%) are the most popular tests and monitoring types used.
They also said:
- Smoke tests – 41.9%
- Real user monitoring (RUM) – 33.8%
- Business transaction monitoring – 25.8%
The full survey results will be published on March 27, 2017 in the 2017 Guide to Performance: Optimization and Monitoring.
Performance Devs on DZone
To get a little more perspective on our Performance audience, we dove into our database and found:
- 86% of Performance readers are male and 14% are female
- 45% are between the ages of 25-34
- They prefer to read opinion, tutorial, and research based articles
- Are located in Brazil, Australia, Canada, Germany, India, the United States, and the Netherlands
- And are movie lovers, news junkies, social media fans, technophiles, and sports fans
The Future of Performance and Monitoring, According to Execs
DZone Research Analyst Tom Smith spoke with executives from performance and monitoring companies to understand the opportunities, challenges, concerns, and changes happening in the industry.
Takeaways from his interviews:
- Incomplete visibility throughout the pipeline is a problem. Monitoring isn't a priority until it's a problem, forcing teams to be reactive instead of proactive.
- Design applications with higher level programming and better tools.
- Virtualization and microservices architecture are important.
- The ability to understand as quickly as possible why performance issues happen is important.
Smith published insights from his interviews in the Performance Zone.
You can also find Smith’s insight and executive research in the 2017 Guide to Performance: Optimization and Monitoring, out on March 27, 2017.