
Why Enterprise-Grade Cybersecurity Needs a Federated Architecture


Large enterprises often have their data centers spread across different geographic regions so that they can locate their apps closer to their employees and customers. It also enables them to comply with data residency requirements and provide disaster recovery for critical business applications. And with the adoption of public cloud, it's even easier for organizations of all sizes to distribute their workloads across multiple regions; AWS, for example, now spans 20 geographic regions around the globe.

With this type of regional distribution, ensuring that data is secure couldn't be more important, and this is where micro-segmentation becomes invaluable for protecting an organization's high-value assets, or "crown jewels" as some enterprises call them. Visibility is just as important, because you can't secure what you can't see. Whether micro-segmentation or otherwise, every cybersecurity solution needs the same local and global survivability as the rest of the enterprise, in a way that is easy to manage and easy to scale given various resource constraints.

Based on the challenges and requirements of massive modern enterprises, as a software vendor responsible for architecting and engineering a micro-segmentation security solution, we identified and evaluated four key customer criteria: resiliency, scalability, manageability, and bandwidth efficiency. For each of three candidate architectures – centralized, distributed, and federated – we mapped out the impact it would have on each criterion.

Centralized Architecture

This is perhaps the most common architecture because it works well if you only have a handful of workloads in a single location and no need to scale globally. With a centralized architecture, the controller or policy engine (in our case, the Policy Compute Engine, or PCE) resides in a single location, which has the following implications for the four criteria:

  • Resiliency: This is the biggest concern with this type of architecture: the controller is a single point of failure, which limits your resiliency and can severely impact business continuity unless you have a solid disaster recovery plan in place

  • Scalability: Centralized allows for both vertical and horizontal scale to support workloads worldwide, and you can add more controllers, but the resiliency challenges remain because those controllers act independently of one another – putting massive scale in one location may be technically feasible, but it means putting "all your eggs in one basket" if something were to happen to that region

  • Manageability: This is probably the most desirable attribute of a centralized architecture, as it makes it easy for teams to configure and apply cybersecurity policy across the entire infrastructure from a single location. Role-Based Access Control (RBAC) can then limit a regional team's access so it can view and modify policy only for the applications specific to its region

  • Bandwidth Efficiency: Centralized uses more bandwidth because all telemetry from the workloads must be sent back to the central controller, and the controller is also typically responsible for sending policy and configuration information back out to the workloads. Bandwidth therefore grows with the number of workloads or users in each region and the number of connections between them (the sketch after this list illustrates this fan-in and fan-out)
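To make the traffic pattern concrete, here is a minimal Python sketch of the centralized model. Every name in it (CentralController, the workload IDs, and so on) is hypothetical and only illustrates the fan-in of telemetry and fan-out of policy described above, not our product's actual API.

```python
# Minimal sketch of a centralized deployment: every workload, regardless of
# region, reports telemetry to one controller and pulls policy from it.
# All names here are hypothetical and only illustrate the traffic pattern.

class CentralController:
    def __init__(self):
        self.telemetry = []          # grows with every workload worldwide
        self.policy_version = 1
        self.policies = {"default": "deny-all"}

    def ingest_telemetry(self, region, workload_id, flows):
        # Every flow record from every region crosses the WAN to this one node.
        self.telemetry.append((region, workload_id, flows))

    def publish_policy(self, workloads):
        # Policy pushes also fan out from this single location.
        return {w: (self.policy_version, self.policies) for w in workloads}


controller = CentralController()  # single point of failure: if this region is
                                  # lost, no region can get policy updates
controller.ingest_telemetry("eu-west", "web-01", [("10.0.0.5", 443)])
controller.ingest_telemetry("us-east", "db-07", [("10.1.2.9", 5432)])
print(controller.publish_policy(["web-01", "db-07"]))
```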

Distributed Architecture

In this type of architecture, there are multiple independent controllers, with one controller placed in each data center or public cloud region. It's attractive to some software vendors because they can build a centralized product and simply have customers deploy it in each geographic region. You essentially end up with a distributed collection of centralized deployments, so the product doesn't have to change in any way, shape, or form. The primary challenge with this approach is manageability.

For example, say you have security policies spread across all of these independent systems and want to make a global policy change: you have to go to each individual controller and change it there. If the change is not applied consistently across the controllers, you get synchronization issues, which in the best case cause inconsistent or unpredictable enforcement of policy and, in the worst case, can lead to downtime of the systems themselves.
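As a rough illustration, the following sketch (hypothetical names, not our product's API) shows what a "global" change looks like when every regional controller is independent: one region that misses the rollout leaves policy inconsistent.

```python
# Illustrative sketch of the manageability problem with independent regional
# controllers: a "global" rule must be pushed to each controller separately,
# and nothing keeps them in sync automatically.

regional_controllers = {
    "us-east": {"rules": {"allow-web": "tcp/80,443"}},
    "eu-west": {"rules": {"allow-web": "tcp/80,443"}},
    "ap-south": {"rules": {"allow-web": "tcp/80,443"}},
}

def apply_global_rule(name, value, skip=()):
    """Push one 'global' change region by region; 'skip' simulates a region
    that was missed or unreachable during the rollout."""
    for region, controller in regional_controllers.items():
        if region in skip:
            continue                       # this region silently drifts
        controller["rules"][name] = value  # one manual change per controller

apply_global_rule("block-smb", "deny tcp/445", skip=("ap-south",))

# Drift check: if any controller missed the change, global policy is inconsistent.
rule_sets = {frozenset(c["rules"].items()) for c in regional_controllers.values()}
print("consistent" if len(rule_sets) == 1 else "inconsistent policy across regions")
```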

All that said, distributed has the following implications for the four criteria:

  • Resiliency: This type of architecture is highly resilient, as the failure of a controller in one region has no impact on the other regions. Distributed does not suffer from the single-point-of-failure issue of centralized architectures

  • Scalability: Distributed can scale with the number of workloads in each data center and the total number of workloads worldwide by simply deploying more controllers

  • Manageability: This architecture allows regional teams to create local policies but, as mentioned above, makes enforcing global policies challenging, as they need to be manually replicated in each region, which can lead to synchronization and consistency issues

  • Bandwidth Efficiency: Distributed is more efficient than centralized when it comes to bandwidth as all of the data is localized to each region. Workloads need only communicate with their designated local controller (and vice versa)

Federated Architecture

A federated architecture combines the strengths of centralized and distributed and is, therefore, a "best of both worlds" approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those controllers act in concert to provide the abstraction of a single centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization's security policy as well as the workloads being secured.
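The sketch below is one way to picture that behavior, assuming a simple in-memory replication of policy between peer controllers; the class and method names are hypothetical and only illustrate the "one logical controller" abstraction, with telemetry staying local to each region.

```python
# Rough sketch (all names hypothetical) of the federated idea: each region has
# its own controller, but controllers replicate policy changes to their peers
# so the fleet behaves like one logical controller.

class RegionController:
    def __init__(self, region):
        self.region = region
        self.peers = []
        self.policies = {}           # shared global view of policy
        self.local_telemetry = []    # stays in-region

    def connect(self, peers):
        self.peers = [p for p in peers if p is not self]

    def publish_policy(self, name, rule):
        # Write locally, then share only the policy change with peers;
        # telemetry never leaves the region.
        self.policies[name] = rule
        for peer in self.peers:
            peer.policies[name] = rule

regions = [RegionController(r) for r in ("us-east", "eu-west", "ap-south")]
for c in regions:
    c.connect(regions)

regions[0].publish_policy("block-smb", "deny tcp/445")
print(all(c.policies.get("block-smb") == "deny tcp/445" for c in regions))  # True
```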

This type of architecture is the best fit for securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was an enterprise-scale architecture that delivers the benefits of federation without the downsides of distributed and centralized. Here is how a federated architecture ticks all of the boxes on the four criteria:

  • Resiliency: As with a distributed architecture, a federated architecture is highly resilient: a controller failure in one region doesn't affect the others

  • Scalability: A federated architecture allows you to scale with the number of workloads in each data center as well as the total number of workloads by deploying more controllers – again, it’s all about securing global infrastructure at scale

  • Manageability: Federated makes it easy for global security and application teams to configure and apply policies consistently across the entire infrastructure. As with centralized, RBAC provides regional teams with scoped access to view and modify policy only for the applications unique to their region (see the RBAC sketch after this list)

  • Bandwidth Efficiency: Because the controllers share only the minimal amount of information the system needs to function, federated is more bandwidth-efficient than either centralized or distributed, making it ideal for enterprises that require a streamlined exchange of data
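On the manageability point above, here is a hypothetical RBAC sketch showing how a regional team's role can be scoped so it can only view and modify policy for its own region and applications; the role names and scope model are illustrative assumptions, not our product's RBAC schema.

```python
# Hypothetical RBAC sketch: a regional team's role grants view/modify only on
# policy scoped to its own region and application; a global role sees everything.

ROLES = {
    "global-secops": {"scope": {"region": "*", "app": "*"},
                      "actions": {"view", "modify"}},
    "emea-app-team": {"scope": {"region": "eu-west", "app": "payments"},
                      "actions": {"view", "modify"}},
}

def authorized(role_name, action, region, app):
    role = ROLES[role_name]
    scope = role["scope"]
    in_scope = scope["region"] in ("*", region) and scope["app"] in ("*", app)
    return in_scope and action in role["actions"]

print(authorized("emea-app-team", "modify", "eu-west", "payments"))  # True
print(authorized("emea-app-team", "view", "us-east", "billing"))     # False
```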

A federated approach meets all four criteria, as summarized in the chart below:

[Chart: how centralized, distributed, and federated architectures compare on resiliency, scalability, manageability, and bandwidth efficiency]

In our enterprise software solution, the controller is the PCE, the brain of the micro-segmentation solution. It is responsible for orchestrating micro-segmentation policy across global workloads and other enforcement points in the infrastructure, and it collects telemetry from that infrastructure, such as network flows and insight into the processes running on the workloads. The PCE uses all of that telemetry to create a map of the communications and traffic flows across and within your enterprise. Again, our central tenet is that you can't secure what you can't see, so part of operating at enterprise scale is helping customers visualize all the communication pathways that exist within their organizations.
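A toy example of that idea: the sketch below folds flow telemetry into a simple map of which workloads talk to which, over which ports. The record format and workload names are assumptions for illustration; the real PCE ingests far richer telemetry.

```python
# Minimal sketch of turning flow telemetry into a map of who talks to whom.
# The record format and labels are illustrative assumptions only.

from collections import defaultdict

flow_telemetry = [
    {"src": "web-01", "dst": "app-03", "port": 8443},
    {"src": "app-03", "dst": "db-07",  "port": 5432},
    {"src": "web-02", "dst": "app-03", "port": 8443},
]

def build_app_map(flows):
    # Aggregate flows into an adjacency map: source -> {(destination, port)}
    graph = defaultdict(set)
    for flow in flows:
        graph[flow["src"]].add((flow["dst"], flow["port"]))
    return graph

for workload, connections in build_app_map(flow_telemetry).items():
    print(workload, "->", sorted(connections))
```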

We have called our federated architecture the PCE "Supercluster" to reflect true "enterprise-grade" and "enterprise-scale" cybersecurity come to life. This is on its way to becoming the new normal, and you'll no doubt be hearing more and more about "supercluster" as enterprises continue to embrace micro-segmentation as a fundamental component of their cyber strategy.


