Focusing on MTTR With Synthetic Monitoring

Doing QA in prod means shifting from a focus on Mean-Time-Between-Failures to a focus on Mean-Time-to-Recovery. A technique for this is synthetic monitoring.

By Martin Fowler, Serge Gebhardt, and Flavia Fale · Jan. 26, '17 · Opinion

Synthetic monitoring (also called semantic monitoring) runs a subset of an application's automated tests against the live production system on a regular basis. The results are pushed into the monitoring service, which triggers alerts in case of failures. This technique combines automated testing with monitoring in order to detect failing business requirements in production.
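As a minimal sketch of the mechanics, the Python below runs a single check against the live system and pushes the result to a monitoring service. The monitoring URL and payload shape are assumptions; any monitoring system with an ingestion API and failure alerting would do.

```python
# A minimal sketch of a synthetic check: run one production test and
# push its result to a monitoring service. The endpoint and payload
# shape are hypothetical; substitute your monitoring system's API.
import time
import requests

MONITORING_URL = "https://monitoring.example.com/api/checks"  # hypothetical

def check_homepage():
    """One 'test' against the live system: the homepage must respond OK."""
    response = requests.get("https://www.example.com/", timeout=10)
    return response.status_code == 200

def report(name, passed, duration_seconds):
    # Push the result; the monitoring service alerts on failures.
    requests.post(MONITORING_URL, json={
        "check": name,
        "status": "pass" if passed else "fail",
        "duration_seconds": duration_seconds,
    }, timeout=10)

if __name__ == "__main__":
    start = time.monotonic()
    try:
        passed = check_homepage()
    except requests.RequestException:
        passed = False
    report("homepage", passed, time.monotonic() - start)
```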

In the age of small independent services and frequent deployments, it's very difficult to test in a pre-production environment with the exact same combination of service versions that will later exist in production. One way to mitigate this problem is to extend testability from pre-production into production environments. This is the idea behind QA in production. Doing this shifts the mindset from a focus on Mean-Time-Between-Failures (MTBF) towards a focus on Mean-Time-To-Recovery (MTTR).

A technique for this is synthetic monitoring, which we used with a client that runs a digital marketplace for cars with millions of classifieds across a dozen countries. They have close to a hundred services in production, each deployed multiple times a day. Tests are run in a pipeline before each service is deployed to production. The dependencies for the integration tests are not stubbed or mocked; instead, the tests run against the real components in production.

Here is an example of a test that is well suited for synthetic monitoring. It impersonates a user adding a classified to her list of favorites. The steps she takes are as follows; a code sketch of the same journey appears after the list.

1. Go to the homepage, log in, and remove all favorites, if any. At this point, the favorites counter is zero.
2. Select some filtering criteria and execute the search.
3. Add two entries from the results to the favorites by clicking the star. The stars change from gray to yellow.
4. Go to the homepage. At this point, the favorites counter should be two.
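A sketch of this journey, using the Selenium 3-era Python bindings that match the PhantomJS setup described later in the article, might look as follows. All URLs, selectors, and credentials are hypothetical; the real test would use the marketplace's actual markup.

```python
# A sketch of the favorites user journey with Selenium WebDriver.
from selenium import webdriver

driver = webdriver.PhantomJS()  # headless browser, per the team's setup
try:
    # Step 1: homepage, log in, and clear any leftover favorites.
    driver.get("https://marketplace.example.com/?excluderequests=true")
    driver.find_element_by_name("email").send_keys("synthetic-test@example.com")
    driver.find_element_by_name("password").send_keys("not-a-real-password")
    driver.find_element_by_css_selector("form.login button").click()
    driver.get("https://marketplace.example.com/favorites")
    for star in driver.find_elements_by_css_selector(".favorite.is-active"):
        star.click()  # un-favorite
    driver.get("https://marketplace.example.com/")
    assert driver.find_element_by_id("favorites-counter").text == "0"

    # Steps 2 and 3: search with some criteria, favorite two results.
    driver.get("https://marketplace.example.com/search?make=audi")
    stars = driver.find_elements_by_css_selector(".result .favorite")
    stars[0].click()  # the star turns from gray to yellow
    stars[1].click()

    # Step 4: back on the homepage, the counter should read two.
    driver.get("https://marketplace.example.com/")
    assert driver.find_element_by_id("favorites-counter").text == "2"
finally:
    driver.quit()
```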

In order to exclude test requests from analytics, we add a parameter (such as excluderequests=true) to the URL. The parameter is handed over transitively to all downstream services, each of which suppresses analytics and third-party scripts when it is set to true.
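A downstream service might honor the flag roughly like this sketch. Flask and the service URLs are assumptions; the essential points are that the flag is read, acted on, and handed over to the next service in the chain.

```python
# A sketch of a downstream service honoring the excluderequests flag.
import requests
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route("/classified/<classified_id>")
def classified(classified_id):
    exclude = request.args.get("excluderequests") == "true"

    # Hand the flag over transitively to the next service in the chain.
    details = requests.get(
        f"https://details.example.com/api/{classified_id}",
        params={"excluderequests": "true"} if exclude else {},
        timeout=5,
    ).json()

    # Suppress analytics and third-party scripts for synthetic requests.
    return render_template("classified.html",
                           details=details,
                           include_analytics=not exclude)
```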

We could use the excluderequests parameter to mark the data as synthetic in the backend datastores. In our case, this isn't relevant, since we reuse the same user account and clean out its state at the beginning of the test. The downside is that we cannot run this test concurrently. Alternatively, we could create a new user account for each test run; to make the test users easily identifiable, these accounts would have a specific prefix or suffix in the email address. Another option would be a custom HTTP header sent with every request to identify it as a test, though this is more common for APIs. Both alternatives are sketched below.
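Here is a sketch of those two alternatives; the email convention and the header name are made-up examples, not conventions from the project.

```python
import uuid
import requests

# Option 1: a fresh, easily identifiable account per test run, so that
# concurrent runs don't share state and synthetic data can be filtered
# out of the backend datastores by its email prefix (hypothetical).
test_email = f"synthetic-{uuid.uuid4()}@test.example.com"

# Option 2 (more common for APIs): mark every request with a custom
# header (hypothetical name) that downstream services treat as a test.
session = requests.Session()
session.headers["X-Synthetic-Test"] = "true"
response = session.post("https://api.example.com/favorites",
                        json={"classified_id": 12345},
                        timeout=5)
```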

Our tests are written with Selenium WebDriver and executed with PhantomJS every five minutes against the service in production. The test results are fed into the monitoring system and displayed on the team's dashboard. Depending on the importance of the tested feature, failures can also trigger on-call alerts.
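A sketch of such a runner is below. The StatsD-style metrics and endpoint are assumptions, and in practice the five-minute schedule would more likely come from cron or a pipeline trigger than from a loop.

```python
# A sketch of the five-minute execution loop feeding a metrics backend.
import socket
import time

STATSD = ("monitoring.example.com", 8125)  # hypothetical StatsD endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def run_favorites_journey():
    """Placeholder for the Selenium journey sketched earlier."""
    ...

while True:
    start = time.monotonic()
    try:
        run_favorites_journey()
        outcome = "success"
    except Exception:
        outcome = "failure"  # the monitoring system alerts on this counter
    duration_ms = int((time.monotonic() - start) * 1000)
    sock.sendto(f"synthetic.favorites.{outcome}:1|c".encode(), STATSD)
    sock.sendto(f"synthetic.favorites.duration:{duration_ms}|ms".encode(),
                STATSD)
    time.sleep(300)  # every five minutes
```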

A selection of tests at the top of the test pyramid are well suited for synthetic monitoring. These would be UI tests, user journey tests, user acceptance tests, or end-to-end tests for web applications, or Consumer-Driven Contract tests (CDCs) for APIs. An alternative to running a suite of UI tests, for example in the context of batch processing jobs, would be to feed a synthetic transaction into the system and assert on its desired final state, such as a database entry, a message on a queue, or a file in a directory.
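For the batch-processing variant, a sketch might feed a synthetic input file into the system and poll for the expected output file; the directory layout and timeout are hypothetical.

```python
# A sketch of a synthetic transaction for a batch job: drop an input
# file and assert that the expected output file eventually appears.
import time
from pathlib import Path

INBOX = Path("/data/import/inbox")       # where the batch job picks up files
OUTBOX = Path("/data/import/processed")  # where results should land

marker = f"synthetic-{int(time.time())}.csv"
(INBOX / marker).write_text("id,price\n999999,1\n")  # synthetic transaction

# Poll for the desired final state instead of asserting immediately,
# since the batch job runs asynchronously.
deadline = time.time() + 600
while time.time() < deadline:
    if (OUTBOX / marker).exists():
        print("PASS")
        break
    time.sleep(10)
else:
    print("FAIL")  # feed this into the monitoring system as a failure
```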

Published at DZone with permission of Martin Fowler, DZone MVB. See the original article here.
