Executive Insights: Performance Optimization and Monitoring
Use real-time user monitoring to gain visibility into the entire pipeline and ensure an optimal user experience with video, applications, and web pages.
To gather insights on the state of performance optimization and monitoring today, we spoke to 12 executives from 11 companies that provide performance optimization and monitoring solutions for their clients.
Here’s who we spoke to:
Josh Gray, Chief Architect, Cedexis
Jeff Bishop, General Manager, ConnectWise Control
Bryan Jenks, CEO and Co-Founder, DropLit.io
Doru Parashiv, Co-Founder, IRON Sheep Tech
Yoav Landman, Co-Founder and CTO, JFrog
Jim Frey, V.P. Strategic Alliances, Kentik
Eric Sigler, Head of DevOps, PagerDuty
Nick Kephart, Senior Director Product Marketing, ThousandEyes
Kunal Agarwal, CEO, Unravel Data
Len Rosenthal, CMO, Virtual Instruments
Alex Rysenko, Lead Software Engineer, Waverly Software
Eugene Abramchuk, Sr. Performance Engineer, Waverly Software
Here are the key findings from the subjects we covered:
The keys to performance optimization and monitoring are a well-designed infrastructure and real-time user monitoring (RUM) to ensure an optimal end-user experience (UX), whether it's video, web pages, or applications. The proliferation of new services, requirements, and devices in diverse geographic locations has made visibility into the entire network critical. You need to be able to see where all of your data resides to understand how performance is, or is not, being optimized.
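What RUM tooling ultimately surfaces are end-user latency distributions, usually reported as percentiles rather than averages so that slow outliers are not hidden. As an illustrative sketch only (the function name and the sample values are hypothetical, not taken from any tool mentioned above), a nearest-rank percentile over collected page-load samples might look like:

```python
import math

def percentile(samples_ms, pct):
    """Nearest-rank percentile over a list of page-load samples (ms)."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(0, rank - 1)]

# Synthetic load times collected from users in several regions
samples = [120, 140, 150, 180, 200, 210, 230, 260, 300, 900]
p50, p95 = percentile(samples, 50), percentile(samples, 95)
```

The gap between the median and the 95th percentile (here one 900 ms outlier) is exactly the kind of signal an averaged metric would flatten out.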
There’s a greater need for visibility, and there’s a proliferation of tools coming online to provide that visibility. However, no one has developed a single solution to provide a complete view across a diverse collection of infrastructures and application architectures. Response times and page-load times have continued to decrease with the adoption of virtualization and microservices. We’re evolving from performance monitoring to performance intelligence with the addition of easy-to-understand, contextually relevant, algorithmically-driven performance analytics. However, it’s important to identify and focus on key business metrics, or else you run the risk of being overwhelmed with data.
The most frequently mentioned performance and monitoring tools used are AppDynamics, New Relic, and DataDog. However, these were just three of more than 30 mentioned, with a trend towards more granular and specialized offerings, and respondents mentioning just a few solutions that came to mind besides their own.
Real-world problems that are being solved with performance optimization and monitoring are time to market, optimization of UX, and reduction in time to resolve issues through greater collaboration among teams. While more tools are coming online, some providers are enabling disparate tools to provide an integrated view to the client, which results in greater visibility into the entire pipeline and faster time to problem resolution. This visibility is also enabling clients to ensure service level agreements (SLAs) are being met by third-party providers.
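Verifying that a third-party provider is meeting its SLA comes down to comparing measured availability against the agreed target. A minimal sketch, assuming periodic boolean health-check probes (the function and the probe data are hypothetical, not any vendor's API):

```python
def sla_met(probe_results, target_pct):
    """probe_results: booleans from periodic health checks against a
    third-party service. Returns (measured availability %, SLA met?)."""
    if not probe_results:
        raise ValueError("no probes")
    availability = 100 * sum(probe_results) / len(probe_results)
    return availability, availability >= target_pct

probes = [True] * 998 + [False] * 2   # 2 failed checks out of 1000
availability, ok = sla_met(probes, 99.9)  # 99.8% measured: SLA missed
```

Real SLA verification also weights probes by duration and excludes agreed maintenance windows, but the comparison at the core is this simple.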
Nonetheless, the most common issues continue to be the need to improve visibility, ease of use, performance, and knowledge of the impact that code has on the UX. Incomplete visibility throughout the pipeline prevents organizations from accurately finding the source of latency in the network, the application, or the endpoint. There continues to be a shortage of professionals who understand distributed computing and parallel processing, so the technical complexity of these tools must be reduced for companies to get the most value from them. Vendors should also improve ease of use through analytics so that IT operations teams do less data interpretation and can focus more on remediation. Understanding the product, the load, load tests, and performance graphs is critical. Many developers do not understand the performance impact of their code, or they prematurely optimize it, which can lead to less readable code with more complex bugs. Talk to end users to understand what they are experiencing and what's important to them; do not assume you know what they want.
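Attributing latency to the network, the application, or the endpoint requires timing each segment separately rather than measuring only the end-to-end total. A toy sketch of per-stage wall-clock timing (the stage names and the sleeps are stand-ins for real work, not actual measurements):

```python
import time

def timed_stage(label, fn, timings):
    """Run one pipeline stage and record its wall-clock duration,
    so latency can be attributed to a specific segment."""
    start = time.perf_counter()
    result = fn()
    timings[label] = time.perf_counter() - start
    return result

timings = {}
# Stand-ins for real stages; each sleep simulates that segment's work
timed_stage("network", lambda: time.sleep(0.02), timings)
timed_stage("application", lambda: time.sleep(0.05), timings)
timed_stage("endpoint", lambda: time.sleep(0.01), timings)
slowest = max(timings, key=timings.get)  # where to focus remediation
```

With only the total (80 ms here), all three teams get blamed; with the breakdown, the application segment is clearly where the time goes.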
The biggest opportunities for improvement are the automatic reaction to, and correction of, issues and having more elegant, thoughtful design, and testing resulting in an optimal UX. In the future, performance and monitoring tools will automatically react to issues and know the difference between mitigating and fixing problems. They’ll be able to do this by collecting more data and identifying a dynamic system to determine what the problem may be before it affects the customer. Data will be more manageable with automated analysis. Application design will feature higher level programming, better tools, and graceful degradation. Just as data is used to solve problems, it can also be used to change the way performance testing is done and measured. All monitoring products will monitor across the hybrid data center, including on-premise and public cloud-deployed applications.
The biggest concerns about performance and monitoring today are the lack of collaboration, the identification of KPIs and how to measure them, and the shortage of expertise. Companies are not moving quickly enough to share and integrate different viewpoints. Smaller teams can implement iterative solutions more quickly, which allows them to learn faster and observe how small optimization differences can have massive hardware implications. It's important to identify and agree upon KPIs for each business unit and how they will be measured. Premature optimization is a common pitfall in software development, and it's common to see software being developed without concern for consistency or use cases, which dramatically affects the quality and speed of the software.
The skills needed by developers to optimize application performance and monitoring are: 1) understanding the fundamentals; 2) understanding the concept of benchmarking and improving; and 3) staying creative. Have an authoritative understanding of the underlying IT infrastructure and the expertise to keep it running in the face of constant change, independent of vendors or location. Understand the architecture of the system: how services talk to each other, how the database is accessed, and how messages are read by concurrent consumers. Keep a broad perspective, an open mind, and an understanding of the needs and wants of the end user. Don't assume the model you have in your mind is correct; know you're going to get it wrong. Get used to designing in a way that makes it easy to make a few small changes rather than having to rebuild the entire application. Set a reliable benchmark for the performance goals that are relevant to your business application and work to improve on those goals as you get more information.
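Setting a benchmark and improving against it can be as simple as timing two candidate implementations under the same workload. A sketch using the standard-library timeit module (the functions compared are illustrative, not from the article):

```python
import timeit

def concat_naive(parts):
    """Build a string by repeated concatenation."""
    out = ""
    for p in parts:
        out += p
    return out

def concat_join(parts):
    """Build the same string with a single join."""
    return "".join(parts)

parts = ["x"] * 10_000
# Same workload, same repetition count: an apples-to-apples baseline
naive_t = timeit.timeit(lambda: concat_naive(parts), number=50)
join_t = timeit.timeit(lambda: concat_join(parts), number=50)
```

The point is less which variant wins on a given interpreter than the discipline: record a number under a fixed workload first, then let each change prove itself against it.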
An additional consideration made by a few of our participants is the question of where performance monitoring begins and ends versus testing and validation. Once a problem is identified and remediation proposed, there is a need to test and validate that the change has completely fixed the problem. What effect will advancements in technologies such as AI, bots, BI, data analytics, ElasticSearch, natural language search, and new open source frameworks with standardized APIs have on performance and monitoring?
Let us know if you agree with their perspective or have answers to the questions they raised. We’d love to get your feedback.
Opinions expressed by DZone contributors are their own.