At some point, your enterprise is going to realize that it has to get serious about improving the performance and quality of its mobile applications. There’s a lot of confusion and misunderstanding about what’s involved in monitoring the performance of mobile applications. In this blog I’m going to describe what it takes to get started and how mobile application performance monitoring works in production.
In talking with a lot of companies about their enterprise mobile applications, I’ve noticed a trend for how mobile initiatives frequently get started. In many instances, someone in marketing or on the business side desperately wants a consumer-facing app for some new project or initiative for the company brand, be it for a promotional effort or some other type of campaign.
But the enterprise IT group doesn’t have the internal skills for native mobile application development, mobile apps in general are outside the domain of the traditional IT Ops group, and it would take far too long for them to gear up to do it. So the business uses its discretionary, project, or advertising/marketing budget to engage an agency or a consultancy to build the app in time for the project at hand.
Then, once the app is out in the wild among your customers for its initial purpose, the following process will inevitably happen:
- the desired scope of the application will expand as the company decides to use it for other purposes, with additional features and functionality beyond the original minimum viable product definition,
- there will inevitably be problems with the app, whether from unanticipated issues or from bugs and performance problems introduced by features added later, such as tying the app into the enterprise IT infrastructure for services needed to support those features,
- users will be upset by the problems and write scathing reviews in the app stores or on social media, which will give the app poor rankings. They may even delete the app entirely,
- the company will realize that the poor app experience is hurting the company brand and costing it customers and good will, and
- there will be frantic calls that somebody has to DO SOMETHING about it to make it right.
At this point it’s likely that IT will have to get involved, because the app has now become a critical part of the company’s business strategy and is tied into the enterprise IT infrastructure. The IT Ops group will most likely already be familiar with Application Performance Management (APM) for the enterprise apps they are responsible for managing, but since they didn’t develop and manage the mobile app, they are probably unfamiliar with how APM works for mobile applications.
Monitoring the performance of mobile applications is a bit different from traditional IT Ops APM, where the server-side applications run on server infrastructure managed by IT in a corporate datacenter or in private or public cloud environments. The main difference is that traditional IT enterprise apps are directly managed by the IT group itself: IT is frequently responsible for building (developing) the app, and then deploying and managing it on server infrastructure where IT has direct access and control.
In the mobile application ecosystem, there is a level of indirection between the application and the process of accessing, controlling, managing and monitoring the application. In the traditional APM world, IT can add monitoring of the application performance via the application infrastructure without having to modify the app itself because they have that direct access to the systems where the application is being hosted and run.
The process for adding performance monitoring to a mobile application as it is being used by your customers is different, due to the level of indirection involved and the lack of direct access to the devices where the application is running.
Since the mobile application is being used directly by your customers on their personal or corporate mobile phones, the mechanism of monitoring the performance of the app “in production” is called Mobile Real-User Monitoring or Mobile RUM. Figure 1 shows the overall process of adding AppDynamics Mobile Real-User Monitoring to your mobile applications so you can monitor their performance.
The first thing you have to do is to incorporate the AppDynamics Mobile RUM SDK into your native mobile application.
- Developers download the iOS or Android SDK from the AppDynamics website
- Developers use the respective integrated development environment (IDE): Xcode for iOS apps, and Eclipse with the Android Developer Tools plug-in or Android Studio for Android apps
- Developers compile the appropriate SDK into the application using the corresponding IDE
- As part of your testing and QA process, you can have a limited number of beta users test your app and monitor its performance before you release it to the app store
- Distribute the beta using one of the available beta distribution mechanisms
- Developer submits the new version of your application to the appropriate app stores
- Once the app has been approved, the new version will appear in the app store
- Customers can then download the new version of the app from the app store
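Conceptually, the SDK step above boils down to initializing the monitoring agent once, as early as possible in the app lifecycle, before any network calls are made. The sketch below uses a hypothetical `MobileRumAgent` class and application key just to show the shape of that step; the real AppDynamics class names, method signatures, and application keys come from the SDK documentation for your platform:

```java
// Hypothetical sketch of agent initialization at app startup.
// The class and method names here are illustrative stand-ins,
// NOT the real AppDynamics SDK API.
public class MobileRumAgent {
    private static String appKey;
    private static boolean started;

    // Called once, as early as possible in the app lifecycle
    // (e.g. Application.onCreate() on Android, or
    // application(_:didFinishLaunchingWithOptions:) on iOS).
    public static void start(String key) {
        if (started) return;   // initialization should be idempotent
        appKey = key;
        started = true;
        // A real agent would hook networking, crash handlers, etc. here.
    }

    public static boolean isStarted() {
        return started;
    }

    public static void main(String[] args) {
        start("AA-BBB-CCC");   // illustrative application key
        System.out.println("agent started: " + isStarted());
    }
}
```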
Data Exchange for Mobile Application Performance Monitoring
Once the end-user has updated to the new version of the app and starts using it, then a number of data flows will be triggered.
- The first thing that happens is that, at some point, the app will make a request via the network to some back-end infrastructure. As part of the request, the AppDynamics Mobile RUM agent that was built into the app via the SDK will automatically detect the request and add an AppDynamics identifier to the header.
- When the response is sent back to the mobile application, additional information will be added to the header, including a GUID (Globally Unique Identifier) that uniquely identifies that particular request for later analysis, the time that particular Business Transaction (BT) took to execute, the average BT execution time, and an identifier for that particular BT.
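The request/response exchange above can be sketched as follows. The `ADRUM-*` header names and helper methods are illustrative assumptions for the example, not the agent's documented wire format:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of the correlation-header exchange described
// above. Header names (ADRUM-*) are assumptions for this example,
// not the real agent's documented wire format.
public class CorrelationSketch {

    // Step 1: the embedded agent tags each outbound request so the
    // server-side agent can recognize it as coming from a monitored app.
    static Map<String, String> tagRequest(Map<String, String> headers) {
        headers.put("ADRUM", "isMobile:true");  // assumed marker header
        return headers;
    }

    // Step 2: the server side answers with correlation metadata: a GUID
    // for this request, the Business Transaction (BT) id, this request's
    // execution time, and the BT's average execution time.
    static Map<String, String> buildResponseHeaders(String btId,
                                                    long execMs,
                                                    long avgMs) {
        Map<String, String> headers = new HashMap<>();
        headers.put("ADRUM-GUID", UUID.randomUUID().toString());
        headers.put("ADRUM-BT-ID", btId);
        headers.put("ADRUM-BT-TIME-MS", Long.toString(execMs));
        headers.put("ADRUM-BT-AVG-MS", Long.toString(avgMs));
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> req = tagRequest(new HashMap<>());
        Map<String, String> resp = buildResponseHeaders("checkout", 180, 150);
        System.out.println("request tagged: " + req.containsKey("ADRUM"));
        System.out.println("BT id: " + resp.get("ADRUM-BT-ID"));
    }
}
```

The point of the GUID is that the same identifier can later be looked up on the server side, tying one slow mobile request to the exact back-end transaction that served it.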
The next part of the data flow depends on how you have chosen to deploy AppDynamics. AppDynamics offers flexible deployment options: a pure SaaS option, a pure on-premise option, or a hybrid SaaS/on-premise option.
The first choice is between the Mobile RUM Cloud SaaS data collector and the on-premise Mobile RUM Server. In step 3 of the data flow, the AppDynamics mobile application agent will send information to the Mobile RUM Cloud (3A in the diagram) or the Mobile RUM Server (3B in the diagram), including the object ID, the NSURL (in the case of iOS, or the Android equivalent), any crash API data from a previous crash, and any custom data that you may have chosen to collect for your application.
The Mobile RUM Cloud or Mobile RUM Server collects this data from all of the mobile application clients that your customers are using, does some processing and aggregation of the data, and then passes it on to the next step. There is no permanent data storage in this step.
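A minimal sketch of that aggregation step, assuming each client beacon arrives as a (Business Transaction id, duration) pair, might look like the following; the data shapes here are assumptions for illustration, not the actual collector implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the per-Business-Transaction aggregation a
// collector like the Mobile RUM Cloud/Server might perform before
// forwarding summaries to the Controller. Data shapes are assumptions.
public class BeaconAggregator {

    // One performance beacon from one mobile client: which Business
    // Transaction (BT) it hit and how long the request took, in ms.
    static class Beacon {
        final String btId;
        final long durationMs;
        Beacon(String btId, long durationMs) {
            this.btId = btId;
            this.durationMs = durationMs;
        }
    }

    // Roll raw beacons up into per-BT average latency; only this
    // aggregate moves on to the next step, not the raw stream, and
    // nothing is stored permanently at this stage.
    static Map<String, Double> averageByBt(List<Beacon> beacons) {
        Map<String, long[]> sums = new HashMap<>(); // btId -> {total, count}
        for (Beacon b : beacons) {
            long[] acc = sums.computeIfAbsent(b.btId, k -> new long[2]);
            acc[0] += b.durationMs;
            acc[1]++;
        }
        Map<String, Double> averages = new HashMap<>();
        sums.forEach((bt, acc) -> averages.put(bt, (double) acc[0] / acc[1]));
        return averages;
    }

    public static void main(String[] args) {
        List<Beacon> beacons = new ArrayList<>();
        beacons.add(new Beacon("login", 120));
        beacons.add(new Beacon("login", 180));
        beacons.add(new Beacon("checkout", 300));
        System.out.println(averageByBt(beacons)); // login -> 150.0
    }
}
```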
The second choice is between the AppDynamics SaaS Controller and the AppDynamics on-premise Controller. In step 4 of the data flow, the processed/aggregated data is sent from either the Mobile RUM Cloud or the Mobile RUM Server to either the SaaS Controller or the on-premise Controller.
The Controller is where all of your application performance data is correlated, baselined, stored, and accessed for monitoring, alerting, analysis, and action by all of the people in your organization that are involved in the running, maintenance, operation, and business of your application.
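To illustrate the baselining idea, the sketch below learns a simple mean/standard-deviation baseline from historical response times and flags observations that deviate far above it. The Controller's actual baselining algorithms are more sophisticated, and the threshold choice here is an assumption for the example:

```java
import java.util.List;

// Illustrative sketch of "baselining": learn what normal response
// time looks like for a metric and flag values that deviate far
// from it. The k-sigma threshold is an assumption for this example.
public class BaselineSketch {

    static double mean(List<Double> xs) {
        return xs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
    }

    static double stddev(List<Double> xs) {
        double m = mean(xs);
        double variance = xs.stream()
                            .mapToDouble(x -> (x - m) * (x - m))
                            .average().orElse(0);
        return Math.sqrt(variance);
    }

    // Flag a new observation if it sits more than k standard
    // deviations above the learned baseline mean.
    static boolean isAnomalous(List<Double> history, double value, double k) {
        return value > mean(history) + k * stddev(history);
    }

    public static void main(String[] args) {
        List<Double> history = List.of(100.0, 110.0, 95.0, 105.0, 98.0);
        System.out.println(isAnomalous(history, 400.0, 3)); // far above baseline
        System.out.println(isAnomalous(history, 102.0, 3)); // within baseline
    }
}
```

Baselining like this is what lets the Controller alert on "slower than normal for this transaction" rather than on a single fixed threshold for every app.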
Your employees access the Controller via the AppDynamics web-based portal, where they can have role-specific views of your application performance data and can collaborate to resolve issues faster (troubleshooting, problem identification, and isolation) via the War Room, or monitor the business via custom performance operations and business/executive dashboards.