Concerns About Performance and Monitoring
Lack of collaboration, difficulty identifying KPIs and how to measure them, and a shortage of expertise are among the most common concerns around performance and monitoring.
To gather insights on the state of performance optimization and monitoring today, we spoke to 12 executives from 11 companies that provide performance optimization and monitoring solutions for their clients.
Here's what they told us when we asked, "What are your biggest concerns around performance optimization and monitoring?"
We’re not moving quickly enough to share and integrate the different viewpoints.
These challenges are pushing us toward granular, service-oriented architectures with tiers of redundancy, and that's a good thing. Organizations are evolving their methodology, moving to smaller teams that work more iteratively. Technical creation benefits from this granularization.
From our own perspective, the players are pretty sophisticated. Application environments are very diverse; for microservices to be successful, they need to interact with many other systems.
The industry is well evolved, and the systemic concerns are ecological: the impact of throwing hardware at the problem. Small optimization differences have massive hardware implications, so both the hardware and the architecture should be designed around a commodity hardware scenario.
There's a lack of expertise with data operations, data scientists, and BI analysts having knowledge of how the software works and how to use performance and monitoring tools.
In some cases, making a service more performant makes its API worse. Some API methods are deliberately kept simple, with no extra data joined or served, which makes those APIs harder for clients to use. Also, forcing eventual consistency usually improves performance, but it makes clients struggle with updates and forces them to use optimistic updates. If the application is integrated with third-party applications, performance optimization should be executed carefully so as not to introduce a regression in how the apps communicate.
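The optimistic-update pattern mentioned above can be sketched as follows. This is a minimal illustration, not any vendor's actual client: `LocalCache` and `api_save` are hypothetical names standing in for a client-side store and a network call.

```python
import copy

class LocalCache:
    """Hypothetical client-side store; stands in for any UI-facing cache."""
    def __init__(self):
        self.items = {}

def api_save(item_id, value, fail=False):
    # Stand-in for a network call to an eventually consistent backend.
    # A real client would also handle timeouts and retries.
    if fail:
        raise RuntimeError("server rejected update")
    return value

def optimistic_update(cache, item_id, new_value, fail=False):
    snapshot = copy.deepcopy(cache.items)  # keep a rollback point
    cache.items[item_id] = new_value       # show the change to the user immediately
    try:
        api_save(item_id, new_value, fail=fail)
    except RuntimeError:
        cache.items = snapshot             # server refused: roll back the local view
        return False
    return True

cache = LocalCache()
optimistic_update(cache, "doc-1", "draft v2")              # succeeds, cache keeps v2
optimistic_update(cache, "doc-1", "draft v3", fail=True)   # fails, cache rolls back to v2
```

The "struggle" the quote describes is visible here: the client must snapshot state and be prepared to undo changes the server never accepted, complexity that a strongly consistent API would not impose.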
The biggest concern that the industry needs to address is when applications are deployed in a public cloud. Today, there are no real performance guarantees or SLAs when business-critical applications are deployed in a public cloud. Solutions like ours solve the on-premise, vendor agnostic, real-time monitoring challenge, but the same capabilities don’t yet exist in the public cloud. We will be addressing this gaping hole over time.
My biggest concern would be spending too much development time improving a part of the system that is not used. Prioritization is important: go for the easy performance improvements first and then re-evaluate. Premature optimization is a common pitfall in software development.
Not correctly weighing the impact of optimization in a clustered application is an issue. Optimizing highly available services that share data is often a tradeoff that involves giving up consistency — visibility of changes to data between different application nodes. The more consistency you have, the slower the system will be. With the ubiquity of HA databases and clustering frameworks, it is common to see software being developed unaware of the choices made by such products with regards to using different consistency levels for different use cases. This dramatically affects the quality and speed of software.
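The consistency-versus-speed tradeoff described above is often expressed through tunable read/write quorums, as in Dynamo-style replicated stores. A minimal sketch of the rule (my illustration, not tied to any specific product): with N replicas, a write acknowledged by W nodes and a read served by R nodes are guaranteed to overlap, and thus observe the latest write, whenever W + R > N.

```python
def is_strongly_consistent(n_replicas: int, write_acks: int, read_acks: int) -> bool:
    """True if every read quorum must intersect every write quorum (W + R > N)."""
    return write_acks + read_acks > n_replicas

# Typical configurations for a 3-replica cluster:
print(is_strongly_consistent(3, 2, 2))  # quorum writes + quorum reads: True
print(is_strongly_consistent(3, 1, 1))  # fastest, but eventually consistent: False
```

Software written without awareness of which configuration its database uses can silently read stale data in the second case, which is exactly the quality problem the quote warns about.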
Do we have the right balance? If we were trying to measure everything, we probably would be locked in data paralysis. Because we want our development teams to work as efficiently as possible, we must balance our optimization and performance projects with the fixes, new features, and innovation required to remain market leaders. Second, are we monitoring the key items correctly? We are basing a lot of important decisions on the data we are collecting. It’s important to make sure there are review stages that green light the queries and that analytics are providing the correct information in an intuitive fashion.
By the way, here's who we spoke to!
- Josh Gray, Chief Architect, Cedexis.
- Jeff Bishop, General Manager, ConnectWise Control.
- Bryan Jenks, CEO and Co-Founder, DropLit.io.
- Doru Paraschiv, Co-Founder, IRON Sheep TECH.
- Yoav Landman, Co-Founder and CTO, JFrog.
- Jim Frey, V.P. Strategic Alliances, Kentik.
- Eric Sigler, Head of DevOps, PagerDuty.
- Nick Kephart, Senior Director Product Marketing, ThousandEyes.
- Kunal Agarwal, CEO, Unravel Data.
- Len Rosenthal, CMO, Virtual Instruments.
- Alex Rysenko, Lead Software Engineer, and Eugene Abramchuk, Senior Performance Engineer, Waverley Software.
Opinions expressed by DZone contributors are their own.