Robservations On a CyberSecurity Podcast
I was listening to a recent podcast recorded as part of the Silicon Valley Code Camp, sponsored by the Electronic Frontier Foundation and hosted by the CISO/Security Vendor Relationship Podcast, in which a panel of experts outlined a series of best practices for winning in the cybersecurity space. In this article, I want to share some of the highlights and provide some thoughts on each in turn.
First: Success in Cybersecurity Involves a Lot More Than Just IT
Most organizations tend to think of cybersecurity as an IT-related problem first and, in many cases, only. This is largely because our cybersecurity challenges arrived alongside the explosive growth in technology. However, if we only envision security as something IT has to deal with, we greatly underestimate its impact. In this podcast, panelist Geoff Belknap, the CSO for Slack, said: “… the reality is that it very much has become a part of your business success, your business strategy, and something key to the growth of the business.”
At its core, security is about trust. Today, a lot of trust in applications is assumed by the end user. When I use a web browser to go to a bank site and then log in to the financial center, there is a huge amount of assumed trust in play. First, I am trusting that the bank has its website built correctly and secured. Then, I am trusting that I will see my account information. At the same time, I am trusting that no one I have not explicitly authorized will be able to see that same information. Once it is visible, I am trusting that it is accurate and current. I am trusting that I will be able to take appropriate action and that any action I do take will be executed the way I intended, with the outcome I expect. And all of that happens in just a few minutes.
Now, think about a scenario where any of those things goes wrong: how much confidence do you think I would have in that institution? The point is simple: people do not continue to do business with organizations they cannot trust.
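To make that chain of trust concrete, here is a minimal sketch, in Python, of just its first link: checking that a site's certificate chains to a trusted authority and was issued for the host we meant to reach. The hostname is hypothetical, and a real banking client does far more than this.

```python
import socket
import ssl

# First link in the trust chain: before any account data moves, verify that
# the certificate presented by the server chains to a trusted CA and was
# actually issued for the host we intended to reach.
hostname = "bank.example.com"  # hypothetical host

context = ssl.create_default_context()  # loads the system trust store

with socket.create_connection((hostname, 443), timeout=5) as sock:
    # wrap_socket raises ssl.SSLCertVerificationError during the handshake
    # if the certificate is untrusted or names a different host, so we
    # never exchange data with an unverified server.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Negotiated", tls.version(), "with", cert["subject"])
```

Every one of the trust assumptions above rests on checks like this one happening silently, correctly, and fast.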
Treating security as part of business success is a culture change, and change is hard. It takes strong leadership and trust; you can read more of my thoughts on that in this article. It stands to reason that a large part of a successful cybersecurity strategy is how the rest of the organization perceives the security team and how much it is valued. If other parts of the organization are afraid to engage with the security team because they think their projects will be delayed or they will have to go through a lot of unnecessary changes, then it will be difficult to have much success. Conversely, when security is an equal part of design, application building, and outcomes, it permeates every part of the organization, and success is far more likely. And this leads into the second point.
Second: Successful CISOs Are Ones That Align Their Priorities With Those of the Business
If the goal of a CISO is to ensure that applications, systems, networks, endpoints, access, and so on are secure, then the reasons behind why and how they are secured need to correlate with the overall goals and direction of the business. For instance, if you are a bank and make the world’s most secure ATMs, but it takes two or more minutes just to connect to an account because customers have to work through layer after layer of authentication and security measures, you are likely going to lose those customers quickly. It is even more likely if your customer is sitting in their living room and it takes that long to log in to your site. Today, security is an intimate and assumed part of every user experience. We need to be every bit as interested in security as part of an exceptional user experience as we are in functionality and performance. And knowing that a human being is at the center of the discussion leads to the third point.
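One pattern that tries to square that circle is risk-based (adaptive) authentication: make the common, low-risk login fast, and add factors only when risk signals warrant it. Here is a minimal sketch, assuming invented context fields and factor names; it illustrates the idea, not any particular institution's policy.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # device fingerprint seen before
    new_location: bool      # IP/geo differs from recent history
    sensitive_action: bool  # e.g., a wire transfer vs. checking a balance

def required_factors(ctx: LoginContext) -> list[str]:
    """Hypothetical risk-based authentication policy.

    Low-risk sessions get a fast path; risk signals add factors,
    rather than making every login slow for everyone.
    """
    factors = ["password"]
    if ctx.new_location or not ctx.known_device:
        factors.append("otp")            # step up on unfamiliar context
    if ctx.sensitive_action:
        factors.append("push_approval")  # strongest check only where it matters
    return factors
```

A returning customer on a known device gets a single-factor fast path, while a transfer initiated from an unfamiliar device steps up to stronger checks; the security effort lands where the business risk actually is.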
Third: People Will Make Mistakes — Empathize
We all understand that security systems inherently work to prevent unintended, wrong, or harmful usage. However, in doing so, they do not make moral decisions about the underlying intent. Humans, on the other hand, do. Humans are largely able to ascertain whether a given behavior was malicious or mistaken and then take appropriate action. Our security systems need to account for this because it is part of the overall experience, and as stated earlier, that is a critical part of success for the business. I thought that panelist Ashan Mir, CISO of Autodesk, provided great insight by stating, “There are no stupid users, just non-empathetic security professionals. If your threat model does not take into account the failure of a human, then your security model is bad. … Our security team is responsible for figuring out what would fail, how it would fail, and assume failure at every step, and build security systems accordingly, and that includes building the security culture.” When we understand that failure is an option at each stage of our process and that mistakes are one of its causes, we tend to be more empathetic and more resilient. We are able to spend much more time resolving issues than ascribing blame. And the notion of less blame is also at the heart of a successful DevOps strategy; for more information, see Chapter 15 of the Google Site Reliability Engineering book. These improved practices lead us to the fourth point.
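What does a threat model that assumes human failure look like in practice? One small example is how failed logins are handled. The sketch below, with invented names and thresholds, treats the first few failures as a typo, slows a human down gently before it ever locks anyone out, and only treats a sustained burst as a likely attack.

```python
# Hypothetical failed-login handler that assumes human error first.
# Thresholds and messages are invented for illustration.
_failures: dict[str, int] = {}  # username -> consecutive failure count

def on_failed_login(username: str) -> str:
    count = _failures.get(username, 0) + 1
    _failures[username] = count
    if count <= 3:
        # Most likely a typo: no penalty beyond the failed attempt itself.
        return "Wrong password. Forgot it? Use the recovery link."
    if count <= 10:
        # Probably still a human: impose a growing delay rather than
        # locking them out of their own account.
        delay = 2 ** (count - 3)  # 2s, 4s, 8s, ...
        return f"Too many attempts. Try again in {delay} seconds."
    # Only a sustained burst is treated as a probable attack.
    return "Account protected; a verification email has been sent."

def on_successful_login(username: str) -> None:
    _failures.pop(username, None)  # success resets the mistake counter
```

Nothing here weakens the control; it just designs the first several interactions around a person who made a mistake rather than an adversary.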
Fourth: Out-of-the-Box Thinking Is Critical to Being a Successful Cybersecurity Pro
I really like this notion because cybersecurity is a multi-faceted problem. It is one in which you are constantly having to anticipate and react to things that have either not been done to a system yet or have been done in ways that were never intended. Hackers are looking for things to exploit, and even the best developers don’t think through EVERY possible way in which a piece of their code might be used. In most cases, they tend to think in terms of “happy paths” and intended usage.
I learned this lesson very early in my career. I was working as a regression tester for a company that made finite element modeling software for engineering firms. One day, I was playing around with the tool, building a simple 2-D model that combined two surfaces with a fillet. For those who do not know, a fillet is simply a rounded edge in a design; think of soldering two pieces of metal together. One of the parameters the software asked for was the fillet radius, which the tool measured in degrees. Because I was playing around, I entered different values to see what they would look like. When I entered a value of “90,” the software crashed and the window died. So, I did what any good tester would do: I re-opened the software, tried it again, and yep… it crashed again. So, I filed a defect because, well, the software shouldn’t crash, right?

Fast forward a few days, and suddenly one of our most senior engineers was standing in my office doorway asking me if I had actually filed this defect. His issue was that it is a mathematical impossibility to have a fillet with a radius of 90 degrees (you essentially end up with a divide-by-zero error; you can see a discussion on this here), and no structural engineer in their right mind would ever enter that as a valid value in our tool. The engineer was entirely right, and he really didn’t want to invest the time in fixing a bug he thought no one would ever run into, so we went to see his manager. When the manager asked me why I had filed it, I told him that a bad value I entered had caused the tool to crash, and I pointed out that someone could easily enter 90 by mistake when they meant 9.0. The manager sided with me, the bug got fixed, and a week or so later, I was able to validate the fix.
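As a minimal sketch of the class of fix that bug needed, consider the Python below: validate the parameter up front and fail with a useful message instead of letting a near-divide-by-zero take the tool down. The formula and names are invented for illustration; they are not the actual product's code.

```python
import math

def fillet_offset(radius: float, angle_deg: float) -> float:
    """Hypothetical fillet construction step.

    The denominator goes to zero as the angle approaches 90 degrees,
    which is exactly the kind of term that crashed the original tool.
    """
    cos_a = math.cos(math.radians(angle_deg))
    # Reject geometrically impossible input instead of crashing on it;
    # a user can always type 90 when they meant 9.0.
    if abs(cos_a) < 1e-9:
        raise ValueError(
            f"{angle_deg} degrees is not a valid fillet angle for this operation"
        )
    return radius / cos_a
```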
The point here is that I had found an edge case that was generally “unreasonable” but had a catastrophic outcome for our tool. Considering that our software was being used by some of the largest aerospace firms in the world, that was not a problem we wanted in the field, and who knows, maybe we saved companies a lot of lost time and broken models by fixing it before it became an issue. Cybersecurity needs the same thing. It’s why “ethical hacking” exists. In order to beat a hacker, sometimes you have to think like one and use systems in completely unintended ways. The people who do this best are the ones who will rise to the top, and they are also the ones best positioned for this final point.
Final Thoughts
Cybersecurity is no longer something that can be assigned to just a small team of experts dedicated to monitoring and remediation. The problem is too large and complex and will quickly scale beyond any one team. And once trust is broken, it is awfully hard to get back, especially in today’s world, where there seem to be multiple options for everything. To be successful in cybersecurity, you need to think differently, align differently, and share responsibilities differently. Those who embrace the change will succeed, and those who don’t will find themselves left behind.