Changes in Performance and Monitoring
Proliferation of the cloud and increased complexity of applications.
To gather insights for DZone's Performance and Monitoring Research Guide, scheduled for release in June 2016, we spoke to 10 executives from nine companies who have created performance and monitoring solutions for their clients.
Here's who we talked to:
Dustin Whittle, Developer Evangelist, AppDynamics | Michael Sage, Chief DevOps Evangelist, Blazemeter | Rob Malnati, V.P. Marketing and Pete Mastin, Product Evangelist, Cedexis | Charlie Baker, V.P. Product Management, Dyn | Andreas Grabner, Technology Strategist, Dynatrace | Dave Josephson, Developer Evangelist, and Michelle Urban, Director of Marketing, Librato | Bob Brodie, CTO, SUMOHeavy | Christian Beedgen, CTO and Co-Founder, Sumo Logic | Nick Kephart, Senior Director Product Marketing, ThousandEyes
We asked these executives, "How has performance and monitoring changed since you began working on it?"
Here's what they told us:
- APM is evolving into digital performance management. Every owner of a smart device should be able to use a feature of your app or website. We want to understand the user need and the user journey by capturing every user, everywhere they access your app or site.
- In the cloud there is no access to the full stack; only the API is exposed. End-to-end user experience (UX) is the way to monitor. We identify the problem but don't tell the client how to fix it; that's what APM is for. The new cloud architectures are changing very quickly: Docker used to be the big thing, but in a serverless environment does Docker even have a place in production?
- There wasn't much when we started: just basic server monitoring. Then New Relic came along in 2008 and let you monitor for $150 per server, providing real-time statistics on how applications, databases, browsers, and disks were performing, with standardized ratings. We're much more advanced today; when something does go wrong, an alarm goes off and tells us where the code is failing.
- People leave the internet up to chance without realizing they can make decisions, beyond the originating ISP, that affect their business and the success of their transactions. We differentiate by looking at the data to see what's going on beyond the originating ISP. Networks need to understand the best ways to interact with customers and other businesses. We identify the "mean time to innocence": how quickly IP issues can be identified as being inside or outside of your system so you can resolve the problem more quickly. The more you know, the more quickly you can address the problem.
- DevOps and continuous delivery have changed the way we create software. There's been a shift in how we create and manage software, though we're going back to the same needs in DevOps that we saw in Waterfall. There's a resurgence in monitoring different elements of the business process. Synthetic user monitoring is coming back, but it's being done with different tools.
- Apps have changed, and people's expectations have changed. People expect monitoring tests to determine whether an app is performing the way it's expected to perform. They want visibility into their applications. Everything is now native mobile, with people using containers and building microservices. The increase in complexity has made it hard to see the link between two containers. Clients want to understand the UX whether it's on a mainframe, a website, or a mobile device.
- There are more competitors in the space, and DevOps now has a need: monitoring is part of the DevOps equation. There are lines drawn between operations and development, and the two get disparate signals. One tool is operations-oriented, the other developer-oriented, and neither side sees or knows what the other is doing or what's considered "healthy." Bleeding-edge companies (e.g., Twitter, Facebook, Google, and Netflix) are pushing toward machine learning and metacomputation to identify what's happening; Netflix uses Atlas, and Google uses OpenTSDB. System administrators need to see the size of the queue. There needs to be more inclusive monitoring.
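The machine-learning monitoring mentioned above often starts from simple statistical baselines. As a minimal sketch (the function name, window size, and threshold are illustrative choices, not any vendor's production values), a rolling z-score can flag when a metric such as queue depth deviates sharply from its recent history:

```python
# Illustrative sketch: flag metric anomalies with a rolling z-score.
# Window size and sigma threshold are arbitrary example values.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Return indices of samples that deviate more than `threshold`
    standard deviations from the rolling mean of the prior window."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A steady queue depth with one sudden spike:
queue_depth = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6, 6, 5, 95, 6, 5]
print(detect_anomalies(queue_depth))  # [12] -- the spike
```

Production systems like Atlas and OpenTSDB layer far more sophisticated models on top, but the core idea is the same: compare each observation against a learned baseline rather than a static threshold.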
- We were founded in 2010 because we sensed a change in the market, with companies relying on IaaS and SaaS to deliver services to businesses or customers. More things needed to be monitored as more services were outsourced. We adapted our model to deliver better service and user experience, and to support different use cases, features, and products. Cloud-based delivery customers share similar content delivery networks (CDNs), cloud providers, etc. Where you have a shared infrastructure, you can benefit from monitoring others' data. We're seeing a movement to unlock data across customer boundaries so clients can benefit from one another.
- Three things: 1) IT is undergoing a huge transformation, moving from the data center to the cloud. Abstractions are changing, and so is the type of monitoring you can do. You lose visibility with AWS or Heroku; host metrics alone are not the answer, since you can no longer see the host. There's a lot of information in logs. 2) Applications are more complicated, with new levels of abstraction and microservices. The bigger challenge is knowing what happened along the way. 3) The market is evolving from pure-play APMs to alternatives without instrumentation.
What have been the biggest changes in performance and monitoring that you've observed?
Opinions expressed by DZone contributors are their own.