
Risk Assessment: When Nothing Is Better Than Something


Are you adequately evaluating security risks to your software and devices? This post examines the quality of the data behind published cybercrime statistics and what that means for risk assessment.



Risk

Any reasonable information security system consists of two fundamental components: (1) a risk assessment; and (2) controls that minimize those risks. In this article, I want to talk about the risk component of risk assessment, and about the companies that sell cybersecurity products and services, which supply the controls.

Understanding the concept of "risk" is tricky for a number of reasons. It is not black and white — risk is a nuanced concept.

First, we need to understand that the study of risk is not just a dry, boring exercise confined to finance, cybersecurity, insurance, actuarial tables, program management, and the like. Each of us informally manages and evaluates risk each day. We even do it unconsciously.

Consider the potentially lethal bowl of cereal you ate this morning. The grain could contain ergot that could poison you. The milk may not have been pasteurized properly, exposing you to pathogenic listeria bacteria. You rely upon people or organizations — risk proxies — to provide you with milk and grain that is safe to consume. Even then, you still are ultimately responsible — a miscalculated ratio of milk to cereal could cause you to choke to death.

The point is, we evaluate so many risks so often that the process becomes a habit, and because we do this risk calculus so automatically, we often develop bad habits.

These bad risk calculation habits — that often lead to adverse consequences — can be grouped into two categories: (1) an inability to discriminate between good data and bad data; and (2) an emotional rather than reasoned decision-making process. Individuals who have mastered these bad habits are often the recipients of a Darwin Award, an award that recognizes individuals who have supposedly contributed to human evolution by selecting themselves out of the gene pool via death by their own actions.

Organizations that manifest these same bad habits usually fail.

Regarding the perception of risk, it has been well established that people's perception of risk differs from the reality of risk. For example, your risk of dying in a bathtub or from a fall (or even from falling in the bathtub) is far, far greater than the risk that you will be killed by terrorists; yet the visceral threat of airplanes hitting buildings, beheadings, or immolation is much more emotionally compelling than the threat of a bathtub.

Regarding bad data, all programmers are familiar with the concept of GIGO — Garbage In, Garbage Out. Economists and statisticians understand that no amount of clever manipulation will ever allow them to extract any meaningful information from bad or dirty data. This concept is so idiomatic that even pigs understand it: "You can't make a silk purse out of a sow's ear."

And, yet, almost every day, silk purses of some type or another are published by providers of cybersecurity products and services.

My concern is the almost complete lack of transparency surrounding these claims, and in particular the quality of the raw numbers and statistics that firms whose sole existence is predicated upon the sale of security products and services make available to their buyers and to the public.

Scott Adams (the author of the comic strip Dilbert) coined the portmanteau "confusopoly," which aptly describes the current information security product and services marketplace.


“A confusopoly is a situation in which companies pretend to compete on price, service, and features, but, in fact, they are just trying to confuse customers so that no one can do comparison shopping.”

Within the information security industry, everyone is aware of the term FUD, an acronym for Fear, Uncertainty, and Doubt. FUD and confusopoly are practically synonymous, and both have had, and continue to have, a major influence on purchasing decisions and assessments of risk within the field.

Quantifying risk in any environment — finance, healthcare, insurance, home mortgages, even breakfast — requires good data. In the field of cybersecurity, we almost always view data that is derived from surveys, and we almost always are denied access to the raw survey data.

Statistics

Let's step back and evaluate the data for a moment. A descriptive statistic is one step removed from the raw data, yet it encapsulates, in a compact form, raw data that is often less understandable in its raw state. An example of a simple descriptive statistic is a baseball player's batting average. For example, Ted Williams had 7706 at-bats during his 19 years in baseball. Of those at-bats, he had 2654 hits. These two numbers are the raw data that define the descriptive statistic of "batting average." In Ted's case, his BA was 2654/7706, or .344. Most baseball aficionados won't remember the raw data, but they will remember and understand that his BA of .344 was exceptional.

A short list of additional simple descriptive statistics includes the mean (average) of a distribution, the median (the point in a distribution at which half of the data lie on one side and half on the other), the correlation coefficient (a normalized measurement of how two variables are linearly related), the variance of a distribution (an indication of the spread of the data), and the standard deviation, which is simply the square root of the variance. The point I'd like to make here is that, given a set of raw data, the descriptive statistics described above are almost always necessary in order to understand and draw conclusions about that data.
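
To make these definitions concrete, here is a minimal sketch using only Python's standard library. The loss figures and the x/y series are invented for illustration; only the Ted Williams numbers come from the text above.

```python
import statistics

# Ted Williams' career raw data, as quoted above.
hits, at_bats = 2654, 7706
batting_average = hits / at_bats            # ~0.344

# An invented sample of per-incident losses (illustrative only).
losses = [12, 15, 18, 22, 25, 30, 41, 55]

mean = statistics.mean(losses)              # the average of the distribution
median = statistics.median(losses)          # half the data lie on either side of this point
variance = statistics.variance(losses)      # an indication of the spread of the data
std_dev = statistics.stdev(losses)          # the square root of the variance

# Two invented series to illustrate the correlation coefficient.
# statistics.correlation requires Python 3.10 or later.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
r = statistics.correlation(x, y)            # Pearson's r: how linearly related x and y are

print(f"BA = {batting_average:.3f}")
print(f"mean = {mean}, median = {median}, "
      f"variance = {variance:.1f}, stdev = {std_dev:.1f}, r = {r:.2f}")
```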

After reviewing about a dozen of the major cybercrime surveys that have come out in the past two years, I've come to the realization that not a single one of them contains data that can be trusted or that is statistically significant in any way. In the only two cases where I have been able to find a study that provides both the mean and the median values for losses or incidents, the data (such as it is) is in both cases extraordinarily heavy-tailed. In other words, in both studies, the mean is far higher than the median.

This means that a few outliers account for a disproportionate share of the total. This skews the mean to the point that it becomes meaningless as a summary of the distribution, in that it no longer reflects reality. However, these skewed distributions provide a significant FUD boost.
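
As a minimal sketch of that skew, with all figures invented rather than taken from any actual survey, consider what just two extreme respondents do to the mean of an otherwise modest sample:

```python
import statistics

# 98 respondents reporting the same modest loss, plus two extreme outliers.
# All figures are invented for illustration.
typical = [5_000] * 98
outliers = [2_000_000, 5_000_000]
reported_losses = typical + outliers

mean = statistics.mean(reported_losses)      # dragged upward by the two outliers
median = statistics.median(reported_losses)  # still reflects the typical respondent

print(f"mean:   ${mean:,.0f}")    # $74,900
print(f"median: ${median:,.0f}")  # $5,000
```

Here the mean is almost fifteen times the median even though 98 of the 100 respondents reported the same modest figure; quoting the mean alone badly misrepresents the typical loss.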

In addition to the lack of publicly available backing statistical data in cybercrime statistics, there is an additional significant complication. Almost all of the publicly available cybercrime statistics rely upon survey data, and survey data — particularly in the area of cybercrime — is difficult to get right. The best arguments for this can be found in the document, Sex, Lies and Cyber-crime Surveys from Microsoft Research.

Our risk-based thinking and calculations are only as robust as the underlying data, that is, only as robust as the degree to which the answers given in cybercrime surveys are both representative and reliable. Survey error, sample error, heavy-tailed distributions, and methodological bias, combined with an almost complete lack of statistical context, provide us with "data" that is neither representative nor reliable. As if this weren't bad enough, almost all of the statistics we see are presented in the form of "infographics," which are thrice removed from the raw data and are consequently impossible to evaluate fairly.

For example, in two widely quoted surveys by two security industry heavyweights, Symantec in 2012 estimated cybercrime losses at roughly US $110 billion, while McAfee's 2009 estimate of the same type of losses was an order of magnitude higher, at roughly US $1 trillion. The difference of roughly $890 billion is certainly not a rounding error and suggests that there are significant errors (or bias) in one or both studies. Thus, we are obligated to make our risk calculations and assessments based entirely on FUD.

While this situation works in favor of security vendors, it negatively impacts the community of security practitioners who rely upon vendor statistics to plan and implement security controls. And, frankly, it is embarrassing to the industry as a whole, or it should be.

Cybercrime is a growth industry. It is simple to find real data on property crime, personal crime, automobile crime, etc. Isn't it about time that we had the same data on cybercrime available?

NOTE: I have found two studies that have promise: VERIS and the NetDiligence Cyber Crimes Survey for 2014.

As always, your comments are encouraged and welcomed.


Topics:
risk, risk assessment, risk and compliance, risk management, risks, security
