Technology moves fast. In the case of server technology, it’s actually moved faster than we could name it. We’ve moved from physical servers to virtual servers to… well, a new kind of server—that sometimes isn’t a server. These latest-generation servers are at the center of a revolution that’s changing the fundamentals within companies everywhere. Securing these servers requires a new approach.
In the last few years, we’ve seen well-intentioned cyber security professionals trying to stop or slow this revolution. They were driven by sincere concerns that the server revolution creates new security challenges they have no way to address. Here’s how Infosec professionals can get with the revolution instead of fighting an unwinnable battle. But let’s start at the beginning, before things got so complicated.
We started with physical servers, which you could touch. A physical server represented hundreds or even thousands of hours of work. These servers were like pets; someone loved them. If one got sick, someone would nurse it back to health. We’ve all spent our share of sleepless nights on a raised floor doing exactly that.
A few years ago, virtual servers (which were just software versions of physical servers) became all the rage. Initially, the driver behind virtualization was cost reduction. A single physical box could host many servers, thereby reducing the number of physical servers – often substantially.
Virtualization created two main challenges for Infosec professionals. First, many security solutions were designed on the assumption that compute resources are plentiful. But minimizing per-server overhead is what lets you pack more servers onto each physical box, so the resource demands of Infosec tools were often at odds with the drive to reduce the number of physical servers (and therefore cost). Second, virtual servers can be copied or moved easily, increasing the risk of data theft.
Infosec professionals were able to secure servers in spite of these added challenges. A combination of policies and product tweaks allowed them to use the same tools and procedures that had done the job in the physical server era. But, just as we were getting used to virtualization, the world changed again. For some, this change is still to come, but it is coming and it cannot be stopped.
What’s driving this change is that the software development world has changed dramatically. Application development that used to take months or years is now done in weeks or days. Development is faster and cheaper, and the resulting applications are more scalable and have fewer bugs. Businesses love these new development methods because they can react to customer needs in record time. And in business, like in the old West, you’re either quick or dead.
In this new world, there’s a new type of server that is now the infrastructure ‘building block.’ These servers are virtualized, but unlike their predecessors they are not hand-built; they are cloned. They are also modularized, making scaling easier and eliminating single points of failure. If one server malfunctions, it’s deleted and replaced with another clone. Nobody loves them; nobody is going to pull an all-nighter on a raised floor to nurse them back to health.
Applications have always been hosted on servers, but there was a clear separation between application architecture and server architecture. Today, cloned servers have become modular application building blocks. These cloned servers also lead much shorter lives. Infrastructure is now provisioned and decommissioned automatically, based on demand. Physical servers, and their first-generation virtualized counterparts, lived for years. These new generation servers are on-demand and are replaced on a schedule or when the application’s life ends, whichever comes first. One of our customers has measured the average life span of their servers at 9 hours.
Along with this unprecedented rate of change, we are also seeing a dramatic rise in the use of containers (led by Docker). Containers, like their traditional server siblings, are application building blocks. They are more efficient and can be created faster than traditional virtual servers, adding more elusive moving targets for security teams.
Servers are no longer empty vessels into which we pour our applications: servers are the applications. The key driver is now speed. Virtualized infrastructure and the orchestration tools that come with it are driving unprecedented speed and agility for businesses. But traditional security methods and tools aren’t built for speed. Security has to change or be left behind.
Here are 7 best practices that security teams need to adopt to move faster while protecting today’s servers:
- Take advantage of server cloning. Agent-based security solutions are ideal for cloned server environments because they can be added to master images, ensuring that every cloned server instance is protected regardless of duration or location. This supports continuous development methods like DevOps instead of slowing things down by trying to bolt security on after deployment. The benefit is near-instant visibility and policy enforcement, regardless of scale.
- Leverage servers as application building blocks. In today’s server environments, servers are configured as one of a small number of building blocks. The ideal security solution allows for the creation of detailed security policies for each building block type. These policies, combined with an agent-based architecture, protect every building-block server from the moment it boots.
- Small footprints matter. With today’s servers, resource utilization is directly related to costs. Heavy security solutions adversely impact VM density in data centers and lead to surcharges in public clouds. This is particularly true in environments that scale on-demand, since security overhead costs are multiplied as infrastructure grows.
- Minimize staff overhead. Many Infosec teams have more tools than they have staff to manage them. The ideal security solution for today’s servers should require no maintenance and should be “set and forget.”
- Don’t lock yourself in. Most servers still live in data centers, and almost all companies will end up leveraging public cloud infrastructure to help manage costs. Implementing and maintaining separate tools (one for the public cloud, another for the data center, etc.) is not only time-consuming and costly; it also slows security down. So choose a solution that works seamlessly in any environment.
- Limit Server Communications. Server firewalls should be configured to allow only the communications the application modules require. All other connections should be blocked. This decreases the attack surface and protects against lateral movement of threats between servers in your data center, movement that network security tools often miss.
- Integrate instead of rolling your own. A security platform with built-in integrations for popular SIEMs, directories, and infrastructure orchestration tools avoids long hours of custom development work and protects the investments you’ve already made.
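To make the first practice concrete, here is a minimal sketch of baking an agent into a master image, shown as a container base image; the agent package name and file are hypothetical, invented purely for illustration:

```dockerfile
# Hypothetical master image. Every server image built FROM this one
# inherits the security agent, so each clone boots already protected.
FROM ubuntu:22.04

# "example-agent.deb" is a stand-in for whatever agent you deploy;
# it is not a real package.
COPY example-agent.deb /tmp/example-agent.deb
RUN dpkg -i /tmp/example-agent.deb && rm /tmp/example-agent.deb
```

The same pattern applies to VM golden images: install and enable the agent once in the template, and every clone inherits it at boot.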
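The building-block practice can be sketched in a few lines of Python. The role names, policy fields, and `policy_for` helper below are all invented for illustration; they show the pattern (one policy per building-block type, looked up at boot), not any particular product’s API:

```python
# Sketch: one security policy per building-block type. Because every
# server is a clone of one of a small number of building blocks, its
# role alone determines its policy. Roles and fields are hypothetical.
POLICIES = {
    "web": {"inbound_ports": [443], "outbound_roles": ["app"]},
    "app": {"inbound_ports": [8080], "outbound_roles": ["db"]},
    "db":  {"inbound_ports": [5432], "outbound_roles": []},
}

def policy_for(role: str) -> dict:
    """Return the security policy a newly booted clone should enforce."""
    try:
        return POLICIES[role]
    except KeyError:
        raise ValueError(f"unknown building-block role: {role!r}")
```

A newly provisioned clone would call `policy_for` with its role tag and enforce the result, so no per-server configuration is ever needed.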
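As a hedged sketch of the "limit server communications" practice, the snippet below derives default-deny, iptables-style firewall rules from a list of the flows an application actually needs; the flow format and generated strings are simplified for illustration:

```python
# Sketch: derive host-firewall rules from the application's required
# flows, denying everything else. The generated iptables-style strings
# are simplified illustrations, not a drop-in ruleset.
def firewall_rules(flows):
    """flows: list of (source_cidr, dest_port) pairs the app requires."""
    rules = [
        f"-A INPUT -s {src} -p tcp --dport {port} -j ACCEPT"
        for src, port in flows
    ]
    # Default deny: any connection not explicitly required is dropped,
    # shrinking the attack surface and blocking lateral movement.
    rules.append("-A INPUT -j DROP")
    return rules
```

Because the allowed flows come from the application’s own module definitions, the ruleset stays correct as clones are created and destroyed.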