What's Difficult About Problem Detection? Three Key Takeaways

There will always be problems that won’t be caught with conventional means, and those are often the ones needing the most attention.

By Emily Arnott · Sep. 19, 22 · Analysis

Welcome to episode 4 of our series "From Theory to Practice." Blameless’s Matt Davis and Kurt Andersen were joined by Joanna Mazgaj, director of production support at Tala, and Laura Nolan, principal software engineer at Stanza Systems. They tackled a tricky and often overlooked aspect of incident management: problem detection.

It can be tempting to gloss over problem detection when building an incident management process. The process might start with classifying and triaging the problem and declaring an incident accordingly. The fact that the problem was detected in the first place is treated as a given, something assumed to have already happened before the process starts. Sometimes it is as simple as your monitoring tools or a customer report bringing your attention to an outage or other anomaly. But there will always be problems that won’t be caught with conventional means, and those are often the ones needing the most attention.

Our panelists come from diverse backgrounds: Laura works at a very new, small startup, while Joanna focuses on production at a large company. But each has experience dealing with problem detection challenges. The problems that are difficult to detect vary greatly depending on what you're focused on observing, but our panel found thought processes that consistently help.

Listen to their discussion to learn from their experiences. Or, if you prefer to read, I’ve summarized three key insights in this article as I have for previous episodes (one, two, and three).

Losing Focus on the Truth and Gray Failure

You might think of your system as having two basic states: working and broken, or healthy and unhealthy. This binary way of thinking is nice and simple for declaring incidents, but can be very misleading. It may lead you to overlook problems that exist in the gray areas between success and failure.

Kurt Andersen gave an example of this type of failure that's becoming more relevant today: gray failure in machine learning projects, caused by training data drifting away from reality. Machine learning can produce very powerful results by feeding an algorithm huge amounts of labeled data until it learns to apply the same labels to new data. For example, an algorithm can be trained to identify bird species from a picture after being shown thousands of labeled pictures.

But what happens when the supplied data starts drifting away from accuracy? If the algorithm starts misidentifying birds because it is working from bad data or has learned incorrect patterns, it won't throw an error. A user trying to learn the name of a species likely won't be able to tell that the result is incorrect. The system starts to fail in a subtle way that requires deliberate attention to detect and address.

Laura Nolan pointed out that this type of gray failure is an example of an even more general problem — how do you know what “correct” is in the first place? “If you know something is supposed to be a source of truth, how did you double check that?” she asked. “In some cases there are ways, but it is a challenge.”

There’s no single way to detect “incorrectness” in a system when the system’s definition of “correct” can drift away from your intent. What’s important is identifying where this can happen in your system, and building efficient and reliable processes (even if they’re partially manual) to double check that you haven’t drifted into gray failure.
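To make that concrete, here's a minimal sketch (in Python, with made-up labels, baseline numbers, and an invented alert threshold) of one such double-check for the bird-classifier example: compare the mix of labels the model produces in production against the mix it produced when it was last validated. This isn't from the episode; it just illustrates the kind of lightweight, even partially manual, check the panel is describing.

```python
from collections import Counter

# Hypothetical baseline: the label mix the classifier produced during
# validation, back when we still trusted its output.
BASELINE = {"sparrow": 0.42, "robin": 0.35, "crow": 0.23}

def drift_score(recent_predictions, baseline=BASELINE):
    """Total variation distance between the recent prediction mix and the baseline.

    0.0 means the mix matches the baseline exactly; values near 1.0 mean it
    has become completely different. The model itself never raises an error
    either way, which is exactly why gray failure needs its own check.
    """
    counts = Counter(recent_predictions)
    total = sum(counts.values()) or 1
    observed = {label: counts[label] / total for label in baseline}
    return 0.5 * sum(abs(observed[label] - baseline[label]) for label in baseline)

# Example: the model has started calling almost everything a crow.
recent = ["crow"] * 80 + ["sparrow"] * 15 + ["robin"] * 5
if drift_score(recent) > 0.2:  # threshold chosen purely for illustration
    print("Possible gray failure: prediction mix has drifted from the baseline")
```

The specific statistic matters less than having some regular comparison against a known-good reference point, because the drift itself will never announce itself.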

Detecting the Wrong Problems and Mixing Up Symptom and Cause

Another big challenge in problem detection: even if you've detected a problem, is it the right problem? Joanna gave an example: your system has an outage or another very high-priority incident that needs dealing with. When you dive into the system to find the cause of the outage, you end up finding five other problems with the system. This is only natural, as complex systems are always "sort of broken." But are any of these problems the problem, the one causing impact to users?

Matt shared an example from his personal life. When getting an MRI to investigate a problem with his hearing, doctors found congestion in his sinuses. It wasn't causing his hearing issues, and furthermore, the doctors guessed that if they gave an MRI to anyone, that person would probably have the same sinus congestion. Some problems you detect, while certainly problems, are ones that most systems simply "live with" and are unrelated to what you're trying to find.

However, functioning despite these problems isn't a guarantee of safety. Laura discussed how a robust system can run healthily with all sorts of problems happening behind the scenes. This can be a double-edged sword. If these problems eventually accumulate into something unmanageable, it can be difficult to sort through the cause and effect of everything that has been piling up. For example, if you find your system "suddenly" runs out of usable memory, it could be because of many small memory leaks, individually unnoticeable, adding up. At the same time, the problems resulting from insufficient memory can seem like issues in themselves, instead of just symptoms of this one problem.
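As a rough illustration of how those individually unnoticeable leaks can still surface early, here's a hypothetical sketch that fits a slope to periodic memory samples rather than waiting for a static threshold to trip. The sample data and alert threshold are invented for the example.

```python
def leak_slope(samples_mb, interval_minutes=5):
    """Least-squares slope of memory usage, in MB per hour.

    A handful of tiny leaks won't trip a static threshold, but a steady
    positive slope over a few hours is visible long before memory runs out.
    """
    n = len(samples_mb)
    xs = [i * interval_minutes / 60 for i in range(n)]  # elapsed hours
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    variance = sum((x - mean_x) ** 2 for x in xs) or 1e-9
    return covariance / variance

# Example: usage creeping up by roughly 12 MB per hour across many small leaks.
samples = [1024 + i for i in range(48)]  # one sample every 5 minutes for 4 hours
if leak_slope(samples) > 10:  # alert threshold is purely illustrative
    print("Memory is trending upward; investigate before it becomes the outage")
```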

These tangled and obscured causes and effects are inevitable in a complex system. At the same time, you can't overreact and waste time on every minor problem you see. Tools like SLOs, which proactively alert you before an issue starts to impact customer happiness, can help you strike that balance.
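For instance, a basic error-budget burn-rate check, which is the mechanism behind SLO-based alerting, might look something like the sketch below. The SLO target, traffic numbers, and threshold are illustrative only; real alerting policies typically combine several windows.

```python
def burn_rate(good, total, slo_target=0.999):
    """How fast this window is consuming the error budget.

    1.0 means errors arrive exactly at the budgeted rate; well above 1.0
    means the budget will run out early and users will feel it.
    """
    error_budget = 1.0 - slo_target
    observed_error_rate = (total - good) / total if total else 0.0
    return observed_error_rate / error_budget

# Example: 50 failed requests out of 20,000 in the last hour against a 99.9% SLO.
rate = burn_rate(good=19_950, total=20_000)
if rate > 2.0:  # illustrative; real policies usually combine several windows
    print(f"Burn rate {rate:.1f}x; page someone before customers notice")
```

The point is that the alert fires based on projected customer impact, not on any single underlying problem, so the many minor issues a complex system "lives with" don't each demand a response.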

Problems Occurring in the Intersection of Systems

Focusing on the user experience can help you understand some problems that are otherwise impossible to detect, including ones that occur when your system is functioning entirely as intended. These problems can result from a situation where your system is behaving as you expect, but not as your user expects. If the user relies on certain outputs of your system, and it produces a different output, it can create a hugely impactful problem without anything appearing wrong on your end.

Laura gave an example of this sort of problem. Data centers use a network design known as a Clos network for redundancy and reliability. To put it simply, when a link fails, traffic fails over to another path, generally providing an uninterrupted connection through this small and common kind of failure. However, a customer's system might react immediately to the link failing, causing a domino effect of major failures resulting from the mostly normal operations of the data center. So where does the problem exist: in the data center's system or the user's system? Both are functioning as intended. Laura suggests that the problem lies only in the interaction between the two systems, which is difficult to detect, to say the least!

Another high-profile example of this type of problem happened with a glitch where UberEats users in India were able to order free food. In this case, an error given by a payment service UberEats had integrated with was incorrectly parsed by UberEats as “success.” The problem only occurred in the space between how the message was generated and how it was interpreted.

This example teaches a good lesson in detecting and preventing this sort of problem. Building robust processes for handling what your system receives from other systems is essential — you can’t assume things will always arrive as you expect. Err on the side of caution, and have “safe” responses to data that your system can’t interpret. Simulate your links with external systems to make sure you cover all types of output that could come in.
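Here's a small sketch of what "err on the side of caution" can look like in code: treat only an explicit, well-formed success as success, and fail closed on anything your system can't interpret. The field names are hypothetical and are not the actual payment API involved in the UberEats incident.

```python
def payment_succeeded(response) -> bool:
    """Fail closed: only an explicit, well-formed success counts as success.

    Anything unexpected (missing fields, unknown status strings, an error
    object we don't recognize) is treated as a failure, never as "probably
    fine." The field names here are hypothetical, not a real payment API.
    """
    if not isinstance(response, dict):
        return False
    if response.get("error") is not None:
        return False
    return response.get("status") == "succeeded"

# An error payload misread as success is exactly the failure mode to avoid.
assert payment_succeeded({"status": "succeeded", "error": None})
assert not payment_succeeded({"status": "error_code_1234"})  # unknown status: fail closed
assert not payment_succeeded({})                             # missing data: fail closed
```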

We hope you’re enjoying our continued deep dives into the challenges of SRE in "From Theory to Practice."
