What The World’s Worst DNA Mixup Teaches Us About Monitoring
It turns out the infamous 'Phantom' wasn't a murderer at all. Police were hunting a hypothetical serial criminal, but the DNA belonged to an innocent factory worker who had handled the same cotton swabs later used to collect samples from the crime scenes. With her DNA already on the swabs, lab results from several investigations pinned a nonexistent crime spree on her.
The Cost of Contaminated Cotton
Forensic scientists thought they'd controlled every testing variable in the lab, but an outside shipment of 'dirty swabs' contaminated results and sent detectives on a wild but futile goose chase that cost police more than 14,000 man-hours and $18 million. The lesson? If you rely on data for answers, you'd better know what factors can affect it. Just one skewed variable can slant your data and derail your test or experiment.
The same lesson applies to web performance monitoring. No DevOps engineer wants to chase down problems outside of their control, like being woken up at 2 AM because a consumer ISP blew a fuse in NYC. Skewed or misleading monitoring data can cost IT teams time investigating false positives, prolong troubleshooting, and hurt overall web performance.
Controlling the Wild, Wild, Web
Getting clean, accurate data about your website's performance on the Internet, the most volatile of environments, is very challenging. There are simply too many factors that affect what the end user experiences, many of which are unknown or not reproducible.
Synthetic testing, or active monitoring, helps by testing in a controlled environment: you define which page or endpoint is being tested, at what frequency, and from which city and ISP. A controlled environment produces consistent results, so any deviation in the data points to an issue that must be addressed. And because the environment is controlled, you can also troubleshoot: synthetic testing lets you run tools like traceroutes, screen captures, and packet captures alongside the test. Real User Measurement (RUM), on the other hand, is not always reproducible, has many uncontrolled variables, and does not allow for on-the-spot troubleshooting. The benefit of RUM is that you are measuring real user experience and can easily correlate the data to revenue and other engagement metrics.
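To make the idea concrete, here is a minimal sketch of a synthetic check in Python, using only the standard library. The URL, baseline, and tolerance values are illustrative assumptions, not part of any monitoring product's API: the point is simply that every variable (endpoint, timeout, deviation threshold) is fixed by you, so an anomalous result implicates the site, not the test.

```python
import time
import urllib.request

def synthetic_check(url, timeout=10.0):
    """Fetch `url` once and return (status_code, elapsed_seconds).

    Reading the full body approximates what a real client downloads.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
        status = resp.status
    return status, time.monotonic() - start

def is_anomalous(elapsed, baseline, tolerance=0.5):
    """Flag a run that deviates from the baseline by more than
    `tolerance` (here 50%) -- the 'deviation points to an issue' rule."""
    return abs(elapsed - baseline) > tolerance * baseline

# Illustrative usage (hypothetical endpoint and baseline):
#   status, elapsed = synthetic_check("https://example.com/")
#   if is_anomalous(elapsed, baseline=0.4):
#       ...  # alert, then troubleshoot with traceroute/packet capture
```

In a real setup the scheduler, probe locations, and ISP selection would come from your monitoring platform; this sketch only shows why fixed variables make the signal trustworthy.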
All-in-all
When a lab loses control (as in the case of the DNA swabs), data becomes skewed, leading scientists to chase the wrong culprit or conclusion. That's why it's important for DevOps teams to test in a regulated setting through synthetic monitoring, which gives them a mechanism to measure performance with fixed variables in a mostly contained system.
Forensic scientists don't have time to pre-test every cotton swab that enters the lab for stray DNA, and IT teams are far too swamped to vet every measurement by hand.
Published at DZone with permission of Mehdi Daoudi, DZone MVB.