
Use Synthetic Monitoring to Gather App Performance Metrics

See how a monitoring API and ELK stack can help you gather metrics and monitor performance benchmarks for your mobile app.


One of the upsides of adopting synthetic monitoring is that you gain many valuable metrics that depict the overall performance of your mobile apps. Once you and your team agree on which app performance metrics to measure, it's time to gather that data and push it to your DevOps dashboards so that every responsible individual and team has access to it. But before you can build your dashboard, you need to know how to collect these metrics and which tool to use to present them.

In our Mobile Performance Benchmark project, we use the Bitbar Monitoring service to gather request timing information for top US retail mobile apps. As you may already know, we have been monitoring the performance of the search functionality in these apps, and one of the specific requests we benchmark is a search for headphones. We believe that gathering metrics on that request's duration over time gives insight into the stability, load, and user experience of each app.

In Bitbar Monitoring, you can reuse your existing automated functional tests (e.g. Appium scripts) to monitor critical user flows and key functionality. At the same time, it provides a feature that captures full network stats for each run. Since the headphones search is performed in the app by an Appium script, the monitoring service captures the HTTP request sent to the store's API and measures its response time. The goal was to collect that request's response time across runs and draw a graph of it in the monitoring dashboard.

When it comes to displaying this graph, we have been using the ELK stack (Elasticsearch, Logstash, Kibana), a popular open source tool stack for log monitoring and dashboard creation. The idea was to push all the network traffic capture data through Logstash into Elasticsearch and then use Kibana to query Elasticsearch for the right information. Kibana can graph data over time, so a custom dashboard displaying the headphones search time could be created.

The Monitoring API provides three ways to get the collected traffic capture information.

The first and most manual method is to fetch the whole traffic capture in HAR format. Each monitoring run in the Bitbar Monitoring service contains a link to a HAR file with all the requests and responses made by the application during the run. The file can then be read, for example with a Python script, and each request-response pair sent to Logstash to be saved into Elasticsearch.

To make Logstash accept HAR entries, a filter must be set up. We used the following Logstash filter configuration:

filter {
  # If this was sent to /har, assume it is an "entry" element from a HAR recording
  if ( [index] == 'har' ){
    json {
      source => "message"
    }
    # Use the timestamp from the request start
    date { match => [ "startedDateTime", "ISO8601" ] }
  }
}
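The filter above covers only parsing. For completeness, here is a minimal sketch of what the surrounding input and output sections might look like, assuming Logstash's HTTP input plugin and a local Elasticsearch. The port and hosts are placeholders, and how the /har request path maps to the [index] field depends on your input configuration, so adjust this to your setup:

```
input {
  http {
    port => 8080
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "har"
  }
}
```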

To get the HAR file using the Monitoring API, you can use, for example, a short bash script:

# This script needs your API token in variable TOKEN,
# the check ID in CHECK, and the number of runs to get
# HAR files for in ITEMS
for URL in $(wget -O- --header "Authorization: Bearer ${TOKEN}" "https://monitoring.bitbar.com/api/results/${CHECK}?items=${ITEMS}" | jq -r .results[].assets.har); do
  RUN_ID=$(echo "${URL}" | sed 's/.*\/runs\/[0-9]*\/\([a-z0-9\-]*\)\/.*/\1/')
  HAR="session_${RUN_ID}.har"
  wget "${URL}" -O "${HAR}"
  if [ -s "${HAR}" ]; then
    echo "Got har!"
    # You could send it to Logstash here, for example using Python
  fi
done
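If you prefer to stay in one language, the same fetch loop can be sketched in Python using only the standard library. This is an illustration, not part of the original tooling: the endpoint and the token/check/items values mirror the bash script above, and the helper that extracts the run ID reimplements the same sed expression:

```python
import json
import re
import urllib.request

def run_id_from_har_url(har_url):
    """Extract the run identifier: the path segment after /runs/<number>/."""
    match = re.search(r"/runs/\d+/([a-z0-9-]+)/", har_url)
    return match.group(1) if match else None

def fetch_har_files(token, check, items=10):
    """Download the HAR file of each recent run for the given check.

    A sketch assuming the same Monitoring API endpoint as the bash
    script above; pass your own API token and check ID.
    """
    url = f"https://monitoring.bitbar.com/api/results/{check}?items={items}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)["results"]
    paths = []
    for result in results:
        har_url = result["assets"]["har"]
        path = f"session_{run_id_from_har_url(har_url)}.har"
        urllib.request.urlretrieve(har_url, path)
        paths.append(path)
    return paths
```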

Then the HAR file can be uploaded through Logstash to Elasticsearch. We used Python for simplicity:

import simplejson as json
import requests

# URL to Logstash should be stored in variable "logstash_host"
# Path to the HAR file should be stored in variable "har_path"
with open(har_path) as harfile:
    har = json.load(harfile)
log = har['log']

for entry in log['entries']:
    # Delete the response content since it does not need to be stored
    del entry['response']['content']
    r = requests.put(logstash_host + "/har", data=json.dumps(entry))

After this, you should be able to see each request and its response in Elasticsearch using Kibana. Each request entry in the HAR file also contains timing information for the request, which is exactly what our search-time dashboard needs.
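To illustrate that timing information: a HAR entry's timings object breaks the request into phases, where a value of -1 means "not applicable," and per the HAR spec the ssl time is already counted inside connect. A small sketch (not from the original article) that approximates an entry's duration from those phases, using the sample values from the API response shown later in this article:

```python
def entry_duration_ms(entry):
    """Approximate a HAR entry's total time from its timing phases.

    Phases valued -1 are excluded; ssl is skipped because the HAR
    spec counts it inside connect.
    """
    timings = entry["timings"]
    return sum(v for name, v in timings.items()
               if name != "ssl" and v >= 0)

sample = {
    "timings": {"blocked": -1, "connect": 0, "dns": -1,
                "receive": 0, "send": 13, "ssl": 0, "wait": 539}
}
print(entry_duration_ms(sample))  # 552
```

The result is close to, but not necessarily identical to, the entry's top-level "time" field, which the service computes from the raw capture.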

Running all these scripts is quite a lot of work. Luckily, the Bitbar Monitoring API can do most of it for you.

The second way of getting traffic capture information using the API is requesting the HAR entries directly:

GET https://monitoring.bitbar.com/api/runs/{{run_id}}/harstats?items=100

By making this HTTP request with a specific run's ID, the API returns the HAR file's entries already parsed for you.

Then you could use a similar Python script as before to push these entries through Logstash to Elasticsearch.

If this is still not easy enough and you know exactly which URL's requests you want to follow, the Monitoring API can also directly provide you with historical data across multiple runs. The response has the following shape:

{
  "harstats": [
    {
      "url": "http://bitbar.com",
      "startedDateTime": "2017-08-25T12:34:08.621Z",
      "time": 553,
      "responseCode": 200,
      "responseSize": 1159,
      "timings": {
        "blocked": -1,
        "connect": 0,
        "dns": -1,
        "receive": 0,
        "send": 13,
        "ssl": 0,
        "wait": 539
      }
    },
    ...
  ],
  "more": true
}

Such a request returns, from the HAR files of recent runs, each request whose URL matches the URL regular expression given in the query.
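Whichever of the JSON-returning methods you use, pushing the parsed entries onward is the same job. A hedged sketch, assuming Logstash is listening for HTTP input on the same /har path as the filter configuration earlier; the function names here are illustrative, not part of any API:

```python
import json
import urllib.request

def harstat_payloads(harstats):
    """Serialize each parsed harstat entry for upload to Logstash."""
    return [json.dumps(entry).encode("utf-8") for entry in harstats]

def push_harstats(logstash_host, harstats):
    """PUT each entry to Logstash's /har path.

    Assumes a Logstash HTTP input as sketched earlier; harstats is the
    "harstats" list from the API response shown above.
    """
    for payload in harstat_payloads(harstats):
        req = urllib.request.Request(
            logstash_host + "/har",
            data=payload,
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req)
```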

We hope you found this helpful. In an upcoming post, we will cover how to build a dashboard in Kibana from these HAR entries. Stay tuned.


Topics:
logstash ,kibana ,elk stack ,elasticsearch ,mobile ,mobile performance ,monitoring

Published at DZone with permission of Jouko Kaasila. See the original article here.

