
Securing Cloud-Native Applications


Securing cloud-native applications presents unique challenges that demand a cloud-native approach to security.


Cloud-native applications build on the cloud delivery model to leverage the elastic, resilient, and agile advantages of cloud computing. Cloud-native is about how applications are architected and deployed, not whether they run in public or private clouds. The cloud-native approach builds applications from loosely coupled microservices, runs them in dynamically orchestrated containers, and takes advantage of horizontal scaling and continuous delivery to improve reliability, time to market, and performance.
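To make that concrete, the sketch below shows what such a deployment might look like on Kubernetes: a hypothetical catalog microservice packaged as a container, replicated by a Deployment, and scaled horizontally by a HorizontalPodAutoscaler. The service name, image, and thresholds are illustrative only.

```yaml
# Hypothetical "catalog" microservice, run as a horizontally
# scalable set of containers managed by Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 3                          # start with three identical instances
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example.com/catalog:1.0     # hypothetical container image
        ports:
        - containerPort: 8080
---
# Scale the service horizontally based on observed CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: catalog
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalog
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```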

Securing online applications requires more than filtering application traffic based on 5-tuples. Platform and application vulnerabilities, application-level attacks, and application abuse require a deeper understanding of the intent behind the traffic. Traffic inspection and analytics, behavioral modeling, machine learning, and automation are essential to get ahead of the threats to online applications.

Service meshes are the latest evolution in microservice architectures and build upon a resource orchestrator such as Kubernetes to create scalable, end-to-end encrypted service fabrics. Service meshes add a consistent infrastructure layer that controls service-to-service communication, abstracting away the protocol interfacing and communication that individual services used to implement, each in their own way. A service mesh introduces a common component with each microservice that implements protocol handling and reliable service routing with consistent telemetry and logging, and adds filtering and decentralized location services. Matt Klein, a software engineer at Lyft and the brain behind Envoy, calls it the "universal data plane," which allows streamlined operations and development for large microservice deployments.

Envoy is a self-contained, high-performance server with a small memory footprint that runs alongside any application language or framework. Envoy is typically deployed as a "sidecar" with the microservice. The Envoy sidecar process runs in the same Kubernetes Pod as the service and as such shares the same network stack. Using iptables, the Envoy proxy can transparently inject itself in the network traffic between the service in the Pod and services in other Pods within the infrastructure. Envoy handles service discovery, load balancing, automatic retries, circuit breaking, rate limiting, and more. Services only need to know about the local Envoy and do not need to concern themselves with network topology or whether they are running in development or production zones. Envoy provides deep observability of L7 traffic, native support for distributed tracing, and wire-level observability of MongoDB, DynamoDB, and more. Envoy has first-class support for HTTP/2 and gRPC for both incoming and outgoing connections and is a transparent HTTP/1.1 to HTTP/2 proxy. Envoy also provides APIs for dynamically managing its configuration.
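As a rough illustration (not taken from any particular deployment), the fragment below sketches how automatic retries and circuit breaking might be expressed in an Envoy v3 configuration. The "inventory" service name and the specific thresholds are hypothetical.

```yaml
# Sketch of an Envoy (v3 API) configuration fragment: automatic retries
# on the route and circuit-breaking thresholds on the upstream cluster.
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 15001 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          route_config:
            virtual_hosts:
            - name: inventory
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: inventory
                  retry_policy:                 # automatic retries on transient errors
                    retry_on: "5xx,connect-failure"
                    num_retries: 3
  clusters:
  - name: inventory
    type: STRICT_DNS
    load_assignment:
      cluster_name: inventory
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: inventory, port_value: 8080 }
    circuit_breakers:                           # stop sending traffic when the upstream is overwhelmed
      thresholds:
      - max_connections: 1024
        max_pending_requests: 256
        max_retries: 3
```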

Developers can now focus on the actual business problem without having to spend time on service discovery, retries, or circuit breaking, or bother with encryption and authorization. Operators can easily monitor the service mesh with consistent telemetry and logging, create dashboards that provide a global health view, and alert on potential issues without having to normalize and correlate logs from different platforms and developers.

The configuration API of Envoy allows its integration with controllers that orchestrate deployment and configuration of the service mesh as a whole. Istio, for example, provides easy deployment of the sidecars and can manage the certificates on each microservice to allow end-to-end encryption and authentication across the service mesh.
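With Istio, for instance, sidecar deployment can be as simple as labeling a namespace so that the injection webhook adds the Envoy container to every Pod scheduled there. The "shop" namespace below is a made-up example.

```yaml
# Enable automatic Envoy sidecar injection for a (hypothetical) "shop"
# namespace: Istio's webhook adds the sidecar to every Pod created here.
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    istio-injection: enabled
```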

Communication between the Envoy proxies can be secured using mutual TLS. The benefit of mutual TLS is that the service identity is not expressed as a bearer token that can be stolen or replayed from another source. Istio also provides the concept of Secure Naming to protect against server spoofing attacks: the client-side proxy verifies that the authenticated server's service account is allowed to run the named service. Istio Auth has a Certificate Authority and automates key and certificate management for the service mesh. It generates a key and certificate pair for each service and distributes the keys and certificates to the appropriate Pods. Istio Auth also rotates keys and certificates periodically and revokes specific certificates when necessary.
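As a small sketch of how this looks in practice, the Istio resources below enable strict mutual TLS mesh-wide and tell client-side sidecars to use the Istio-managed certificates when calling a hypothetical inventory service; exact API versions and hostnames will vary per installation.

```yaml
# Mesh-wide policy: sidecars only accept mutually authenticated TLS.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
# Client side: use Istio-managed certificates when calling the
# (hypothetical) inventory service.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: inventory
spec:
  host: inventory.shop.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```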

Note that Envoy with Istio is just one open-source option for a service mesh; Linkerd and Consul Connect provide alternative open-source solutions.

Cloud-Native Security

Cloud-native applications require cloud-native security solutions! Bolting on traditional security solutions quickly reverts the service mesh to a central chokepoint design, which requires decrypting and re-encrypting all traffic flows. The right way to secure an end-to-end encrypted service mesh is to push the protection down to the microservice level, closer to the application, at the level where traffic is decrypted by the sidecar.

Leveraging the deep L7 observability and the encryption termination provided by Envoy, an active protection element can be realized either as a plug-in for the sidecar or as a second sidecar that injects itself transparently in the traffic stream between the service mesh sidecar and the microservice. A plug-in provides tighter integration with the service mesh and can leverage the existing service mesh control plane. The plug-in, acting as an active protection element, needs to provide policy enforcement as well as traffic inspection and analysis.
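One way such a plug-in could hook into the sidecar is Envoy's external authorization filter, which hands each request to a co-located protection service for an allow/deny decision before it reaches the microservice. The fragment below is a sketch only; the "security_agent" cluster stands in for whatever local inspection service is deployed.

```yaml
# Sketch: inserting an active protection element into the sidecar's HTTP
# filter chain via Envoy's external authorization filter.
http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    grpc_service:
      envoy_grpc:
        cluster_name: security_agent    # hypothetical co-located inspection/enforcement service
      timeout: 0.25s
    failure_mode_allow: false           # block traffic if the agent is unreachable
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```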

Policy enforcement can leverage the functionality of the host process (the sidecar); Envoy, for example, natively supports rate limiting and filtering. The plug-in should also be able to instruct its host to redirect or inject traffic for additional challenges or fingerprinting of the endpoint, enriching the contextual information on the intent and nature of the communication.
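For example, a local rate limit filter in the sidecar's HTTP filter chain can cap request rates with a simple token bucket. The fragment below is illustrative; the bucket size, fill rate, and runtime keys are placeholder values.

```yaml
# Sketch: local rate limiting in the sidecar, capping requests to the
# microservice at roughly 100 per second via a token bucket.
- name: envoy.filters.http.local_ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
    stat_prefix: svc_rate_limit
    token_bucket:
      max_tokens: 100
      tokens_per_fill: 100
      fill_interval: 1s
    filter_enabled:                      # evaluate the limit for all traffic
      runtime_key: local_rate_limit_enabled
      default_value: { numerator: 100, denominator: HUNDRED }
    filter_enforced:                     # and actually enforce it
      runtime_key: local_rate_limit_enforced
      default_value: { numerator: 100, denominator: HUNDRED }
```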

Since policy enforcement and traffic inspection are distributed, the plug-in itself sees only part of the traffic. There is no reason to assume that the same endpoints will interact with only one service or a specific instance of a service. As such, analytics and intelligence need to be centralized in the security solution. Whether the central analytics and inspection require full packet copies or just metadata, such as query strings and headers, will depend on the nature of the security solution. Some inspection can be performed locally at the microservice while metadata is kept centrally for correlation and global analytics. It is, however, important to keep the added latency and footprint of the sidecar as small as possible to preserve its transparency from the microservice's perspective.
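One plausible way to ship that metadata centrally without copying full payloads is Envoy's gRPC access logger, which streams per-request records (headers, paths, response codes) to a collector service. In the sketch below, the "analytics_collector" cluster name and the selected headers are hypothetical.

```yaml
# Sketch: streaming per-request metadata (not full payloads) from each
# sidecar to a central analytics service over gRPC.
access_log:
- name: envoy.access_loggers.http_grpc
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig
    common_config:
      log_name: mesh_security_metadata
      transport_api_version: V3
      grpc_service:
        envoy_grpc:
          cluster_name: analytics_collector   # hypothetical central collector
    additional_request_headers_to_log: ["user-agent", "x-request-id"]
```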

The control plane services of the security solution, the central brain so to speak, should themselves be implemented as microservices and behave as service endpoints in a service mesh, providing elastic scaling and deployment flexibility.

Security Control Plane Services

Cloud-native application protection does not require substantially different security features than traditional web applications and APIs do. The most significant change is distributing enforcement while centralizing inspection and analytics.

Securing cloud-native web applications and exposed APIs can build on the same functionality offered by existing "traditional" Web Application Firewalls (WAFs). The industry has coined a name for cloud-native web application firewalls: "Next-Gen WAF."

Online applications and APIs are also becoming more frequent targets for abuse by bad bots (OWASP Automated Threats to Web Applications). Managing bot traffic to allow good bots to grow your business while preventing bad bots from inhibiting growth is imperative for the performance of an online business.

Denial-of-service attacks can also originate from within the service mesh. Intentionally or unintentionally, one microservice can generate a flood of requests that cascades through and is amplified by the mesh. Traffic pattern analytics at the global level can detect and mitigate such attacks.

As with cloud-native applications, a cloud-native security solution is about the architecture of the solution, not where the services are running. Part or all of the security control plane services could run in the private cloud, side by side with the cloud-native applications, or they could run as services in a public cloud, protecting private and public deployments of cloud-native applications.

