DevOps Security at Scale (Part 6): Technology Adoption
In the age of DevOps, the software security model needs to change — and that's where security policy as code comes in.
This is the sixth and final blog post in a series discussing how high-performing DevOps teams build secure systems at scale.
Technology moves fast, but developers have historically viewed many security teams as laggards: enforcers of outdated policies, resistant to adopting modern tools. For some security teams, that reputation may be deserved. But for the security teams working in high-velocity DevOps environments, it couldn't be further from reality.
The pace of technology adoption in DevOps has been nothing short of astonishing. Countless SaaS products and open source solutions exist for CI, CD, configuration management, orchestration, and beyond. The security teams that have been successful in this fluid landscape have been the ones most willing to adapt to these new systems, not resist them.
A Sea Change in Security Philosophy
Inferior security teams see new technology in a negative light. To them, modern tools represent a growing threat surface of ever-increasing attack vectors and potential vulnerabilities. Superior security teams take a more “glass half full” view: if adopted safely, new tools can allow the business to move faster than before, or even make the organization safer.
Traditional IT security philosophy was established in the static world of on-premises data centers and well-known assets. Heavy "front doors" like perimeter firewalls were the primary defense against attackers. Internal security systems focused on managing the organization's mostly predictable set of people and machines. LDAP, Kerberos, and Active Directory became popular for identity and role management. As new machines were purchased or new staff members were hired, they would be manually configured into these systems. Modern software organizations have since embraced dynamic infrastructure based on microservices, containers, orchestrators, and serverless technology. Attempting to apply traditional security philosophy to such a fast-changing environment simply doesn't work.
World-class security teams have already adapted their DevOps culture to this new world. Instead of taking a "lock it down" view of their production assets, their approach is better described as "support the business in moving fast, and doing so safely."
New Technology Improves Security
It’s perhaps counter-intuitive, but adopting new technology can help organizations be more secure. It just has to be done judiciously and safely.
Embracing new technology enables new kinds of security protections, helps security workflows keep up with the fast pace of IT progress, and prevents shadow IT initiatives.
For example, BeyondCorp is a security model from Google that transforms the typical deployment model for internal applications. Instead of placing sensitive applications behind a VPN, Google instead deploys those applications to the public Internet. Access to those apps is governed by the accessor’s identity, not by which network they are on. This is in drastic contrast to traditional IT policy, but it has enabled massive velocity and flexibility for Google and the other organizations who use it. Conjur applies a similar philosophy to its product by applying identities to all dynamic computing assets, as well as human users.
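The core idea can be made concrete with a small sketch. The following Python function illustrates an identity-based access decision in the BeyondCorp style: access is granted per request based on a verified identity and device posture, never on the caller's network location. All names and the policy structure here are illustrative, not Google's or Conjur's actual implementation.

```python
# Identity-based access sketch: who you are (and whether your device is
# trusted) decides access -- which network you are on does not.

# Illustrative access table: identity -> apps that identity may reach.
ALLOWED = {
    "alice@example.com": {"payroll", "wiki"},
    "bob@example.com": {"wiki"},
}


def authorize(identity: str, device_trusted: bool, app: str) -> bool:
    """Grant access from any network iff identity and device checks pass."""
    if not device_trusted:
        return False  # untrusted devices are denied even on a "safe" network
    return app in ALLOWED.get(identity, set())
```

Note that nothing in the decision inspects a source IP or VPN membership: the same check applies whether the request arrives from a corporate LAN or the public Internet.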
Other developer-centric technologies are radically reducing the number of sensitive assets that engineers need to access. This is a broader win for security, as it helps enforce better separation of duties. For example, PaaS systems like Pivotal Cloud Foundry and deployment orchestrators like Kubernetes offer clear isolation for developers that is wholly separate from the operational view of the system.
The serverless revolution has even more potential for dramatic security improvements. Developers working with VMs or containers sit very close to the operating system, where many potentially sensitive or risky assets live. Those assets are completely inaccessible in a serverless or Function-as-a-Service (FaaS) model. Safely embracing that kind of technology should be an easy proposition for most security teams. Why worry about monitoring and protecting assets from developer access when you can just remove them from the equation instead?
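To see why the FaaS model shrinks the developer's exposure, consider a minimal handler in the AWS Lambda style (the signature is real; the business logic is a made-up example). The developer writes only the function body; provisioning, patching, and logging into the underlying host are the platform's job, so SSH keys, OS credentials, and host agents simply never enter the developer's workflow.

```python
# A minimal Function-as-a-Service handler in the AWS Lambda style.
# The function receives only an event payload and returns a result;
# there is no host to log into, so OS-level assets are out of reach
# by construction.

def handler(event, context=None):
    """Pure business logic: no SSH keys, no host access, no OS patching."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```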
Old Thinking vs. New Ideas
The traditional model of IT security focused on figuring out which assets to control, then on building structures to lock them down. All workflows and processes layered on top were built with this centralized control model as a fundamental assumption.
The new approach works from an assumption that it will no longer be possible to map an organization’s set of systems. Instead, strong but simple baseline rules are applied: every entity must have an identity, privilege changes must be communicated clearly through policy, duties are clearly separated, and business velocity must be enabled through automation. And with this new philosophy, modern DevOps teams have maintained their high velocity of feature delivery, and done so without sacrificing security. It’s only a matter of time before this new approach becomes the accepted model for all technology organizations.
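The baseline rules above lend themselves to being expressed as policy code. The toy sketch below shows the shape of the idea: every entity has a declared identity, privilege grants live in a declarative structure that can be reviewed and versioned like any other code, and everything not explicitly granted is denied. The structure is purely illustrative, not any vendor's actual policy format.

```python
# "Policy as code" toy: identities and grants are plain data, so a
# privilege change is a reviewable diff, not an opaque console click.

POLICY = {
    "identities": {"web-app", "deploy-bot", "alice"},
    "grants": [
        {"who": "web-app", "can": "read", "what": "db-password"},
        {"who": "deploy-bot", "can": "execute", "what": "prod-deploy"},
    ],
}


def is_permitted(who: str, action: str, resource: str) -> bool:
    """Deny by default; permit only what the policy explicitly grants."""
    if who not in POLICY["identities"]:
        return False  # entities without an identity get no access at all
    return any(
        g["who"] == who and g["can"] == action and g["what"] == resource
        for g in POLICY["grants"]
    )
```

Because the policy is data under version control, a privilege change arrives as a pull request that both security and engineering can review, which is how the "communicated clearly through policy" rule is enforced in practice.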
Published at DZone with permission of Brian Kelly, DZone MVB. See the original article here.