
How Performance Has Changed


There's a proliferation of tools that can be used for monitoring and optimization, but there is still no holistic solution.


To gather insights on the state of performance optimization and monitoring today, we spoke to 12 executives from 11 companies that provide performance optimization and monitoring solutions for their clients.

We asked these 12 executives, "How has performance optimization and monitoring changed since you began working on it?" Here's what they told us:

We've moved from an enterprise environment without a steady emphasis on end-user experience and service quality to a code-and-execute environment that monitors system-level performance. Now we can change the nature of the code and see how well it performs in a live application environment. If the application isn't legacy, you can change the code or the architecture to mitigate problems. The focus has shifted from how well you're performing at a single point in time to performance throughout the usage lifecycle.

Infrastructure and application architectures have grown more diverse. There are multiple layers and tiers, each more granular and with more sources of variability, which increases the need for visibility.

There's been a shift to SaaS, and cloud-based vendors are constantly improving. There's a proliferation of tools; it's no longer one tool for Java. With many different languages in play and no uniform monolith, you use a trace ID to monitor flow through the system. Monitoring has also grown more sophisticated: we've moved away from simplistic thresholds to exponentially weighted moving averages, because performance data is not uniform (e.g., order velocity varies by time of day), and we can now handle these variables. Watch for more useful metrics that try to identify the keys to success; they vary by business.
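As an illustration of that move away from simplistic thresholds, here is a minimal sketch of EWMA-based anomaly detection in Python. The smoothing factor, tolerance, and sample values are illustrative assumptions, not figures from any of the vendors quoted here.

```python
# Minimal sketch: flag samples that deviate sharply from an
# exponentially weighted moving average (EWMA) baseline, instead of
# comparing against a fixed static threshold. alpha and tolerance
# are illustrative assumptions.

def ewma_alert(samples, alpha=0.3, tolerance=2.0):
    """Yield (value, is_anomaly) pairs. A value is flagged when it
    deviates from the running EWMA by more than `tolerance` times the
    EWMA of recent absolute deviations."""
    avg = None
    dev = 0.0
    for value in samples:
        if avg is None:
            avg = value          # seed the baseline with the first sample
            yield value, False
            continue
        is_anomaly = dev > 0 and abs(value - avg) > tolerance * dev
        # Update the running estimates after checking the sample.
        dev = alpha * abs(value - avg) + (1 - alpha) * dev
        avg = alpha * value + (1 - alpha) * avg
        yield value, is_anomaly

# Order velocity or latency varies by time of day, so a fixed
# threshold would misfire; the EWMA adapts to the shifting baseline.
latencies_ms = [110, 120, 115, 118, 400, 122, 119]
for value, anomaly in ewma_alert(latencies_ms):
    print(value, "ANOMALY" if anomaly else "ok")
```

Because both the baseline and the deviation estimate track recent data, the same alerting rule can follow a metric whose normal level changes throughout the day.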

We see a lot more reduction in response and page-load times. There are more content delivery networks (CDNs), refactored data centers, and cloud services, with a desire to bring together a unified view of the application's structure and delivery: what is the health of the application from inception to the end user? Most clients are using one monitoring tool for mobile, one for non-mobile, and one for infrastructure, on top of the actual app delivery and end-user experience. CDNs claim to be able to reduce response time by 50%, but you need to be able to measure that to ensure they're delivering what they promised.
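Staying with that measurement point, here is a minimal sketch of how you might verify a claimed response-time reduction yourself, assuming the same asset is reachable both at the origin and through the CDN. The two URLs are hypothetical placeholders.

```python
# Minimal sketch: compare median fetch times for the same asset served
# from the origin and from the CDN edge. URLs are placeholders.
import time
import urllib.request

def median_fetch_ms(url, runs=5):
    """Fetch `url` several times and return the median wall-clock
    time in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # include full body transfer in the timing
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

origin_ms = median_fetch_ms("https://origin.example.com/asset.js")
cdn_ms = median_fetch_ms("https://cdn.example.com/asset.js")
print(f"origin: {origin_ms:.0f} ms, cdn: {cdn_ms:.0f} ms, "
      f"reduction: {100 * (1 - cdn_ms / origin_ms):.0f}%")
```

Medians over several runs smooth out one-off network noise; a production check would also vary client locations, since CDN benefit depends heavily on where users are.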

Virtualization and microservices architecture are important. Virtualization means thinking differently about how to scale, making your app portable, and having the ability to upgrade. A microservices architecture means building your software out of small components. Having different microservices for the front end, the back end, web services, and connections to devices for each protocol enables you to tune the type of machine based on which resources are constrained.

We’ve gone from performance monitoring to performance intelligence. APM is now going beyond graphs and metrics to intelligence. There’s still a lot of opportunity for improvement.

During the project lifecycle, we’ve moved from addressing pain points in service code to optimizing the whole infrastructure and interop between services. From the very start, we had clear failing places that obviously needed optimization. Later, we couldn’t improve without making changes to the infrastructure.

The addition of easy-to-understand, context-relevant, algorithmically driven performance analytics has been key. These analytics deliver results and decision support derived from the collective best practices of our enterprise customer base, complementing real-time reporting and dashboard capabilities. Our expanding catalog of applied analytics enables IT agility: teams can proactively optimize the infrastructure for perpetually changing and complex workload demands and identify emergent issues before they become performance-impacting problems, well before end users are affected.

I was one of the original members of the IRON sheep team, and performance optimization and monitoring weren't priorities in our first projects. Now, we run many rounds of load testing on several projects and optimize accordingly. We also set up the monitoring tools for these projects. So I would say performance optimization and monitoring have changed dramatically since I started working at the company.

On one hand, tools have gradually become a lot better, in that you can get deep insights into all application layers, and good tools such as New Relic have reached a high level of maturity. On the other hand, popular new stacks are making do with young and immature monitoring tools. The proliferation of containers and Golang has set performance monitoring back a couple of years for many developers compared with monitoring Java-based applications.

It has definitely evolved. We certainly stepped into it thinking we should measure just about everything. What we realized is that while measuring everything is great, the list of the most impactful items is considerably smaller. So we've asked the teams to address those items first, and we review them quarterly in our BU meetings to see how things are going.

What have been the most significant changes from your perspective?

By the way, here's who we spoke to!


