Segregation of Duties on AWS
It's like two-factor authentication IRL, and AWS can help with that.
In the book Accelerate, Forsgren et al. state the following about Segregation of Duties:
What About Segregation of Duties?...First, when any kind of change is committed, somebody who wasn't involved in authoring the change should review it either before or immediately following commit to version control. Second, changes should only be applied to production using a fully automated process that forms part of a deployment pipeline. — Forsgren, Nicole. Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (IT Revolution Press).
There's often a lot of discussion and confusion about Segregation of Duties (SoD) in enterprises, and the Accelerate book helps clarify the concept and provides actionable recommendations for implementing SoD in organizations.
Segregation of Duties (referred to as Separation of Duties or Separation of Concerns in other publications) is a common concern, particularly in enterprises where there's significant risk if systems are breached. We come across this frequently with our customers in the context of applying DevOps practices in these organizations. DevOps is about increasing the speed of effective feedback between customers and engineers, so when enterprises require multiple manual approvals, queues, boards, and teams, that feedback is significantly slowed. Consequently, it's important that while you're increasing this speed, you're not in any way reducing the necessary controls that mitigate risk in these organizations.
While each company often applies SoD differently, we documented some of the heuristics and examples of what works well when deploying software systems on the Amazon Web Services (AWS) cloud:
- All changes to the software system of record (the application/service code, configuration, infrastructure, data, and anything else that makes up a software system) are made only through code committed to a version control system. Following the principle of least privilege, some roles might also have read-only access to production systems.
- For every commit, someone who wasn't involved in authoring the change reviews the code either before or immediately after the commit to version control — for example, via a pull request in the version control system, such as GitHub, CodeCommit, or Bitbucket.
- The entire workflow is fully automated, and specific controls are in place that log and limit access to production environments. All pipeline events are logged and can be audited at any point. Traceability exists for linking code commits to features, tests, and other artifacts in the issue tracking system (e.g., JIRA).
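The first point above can itself be enforced in code. As a minimal sketch, the following builds a deny-style guardrail (in the style of a service control policy) that blocks direct production writes for everyone except the deployment pipeline's role, so changes can only flow through the pipeline. The role ARN and the action list are hypothetical illustrations, not a complete policy.

```python
import json

def build_production_guardrail(pipeline_role_arn):
    """Deny direct write actions against production unless the caller is
    the deployment pipeline's role. The action list is illustrative,
    not exhaustive."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyDirectProductionWrites",
                "Effect": "Deny",
                "Action": ["cloudformation:*", "ec2:*", "s3:Put*", "s3:Delete*"],
                "Resource": "*",
                "Condition": {
                    # aws:PrincipalArn is a global condition key; only the
                    # pipeline role is exempt from the deny.
                    "ArnNotLike": {"aws:PrincipalArn": pipeline_role_arn}
                },
            }
        ],
    }

# Hypothetical account ID and role name.
policy = build_production_guardrail(
    "arn:aws:iam::123456789012:role/deployment-pipeline"
)
print(json.dumps(policy, indent=2))
```

Because the policy is just data, it can live in version control and go through the same pull-request review as any other change.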
These are the key services/tools/processes for which you need to control administrator/write access:
- Deployment Pipeline Tool — all automation is orchestrated from this tool (e.g., AWS CodePipeline). The canonical version of this tool uses least privilege, and all configuration changes are logged.
- Version Control — Mainline must have approved pull requests. There are approval logs for every code change.
- AWS services — ensure these are locked down from modification via IAM using principle of least privilege.
- Read-only access to all systems of record (environments, tools, etc.) in a deployment pipeline using least privilege.
All other resources should be locked down and have automation apply changes through versioned code. Just as with any other change, the code that composes this automation should also have a peer review as part of the deployment pipeline.
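The read-only access described above can also be expressed as versioned code. A minimal sketch of such a least-privilege policy; the log bucket name is a hypothetical placeholder:

```python
import json

def build_read_only_policy(log_bucket):
    """Least-privilege, read-only access to pipeline state and logs.
    Observers can audit everything but change nothing."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadPipelineState",
                "Effect": "Allow",
                "Action": ["codepipeline:Get*", "codepipeline:List*"],
                "Resource": "*",
            },
            {
                "Sid": "ReadDeploymentLogs",
                "Effect": "Allow",
                "Action": ["logs:GetLogEvents", "logs:FilterLogEvents"],
                "Resource": "*",
            },
            {
                "Sid": "ReadArchivedLogs",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                # Hypothetical bucket holding archived pipeline logs.
                "Resource": f"arn:aws:s3:::{log_bucket}/*",
            },
        ],
    }

print(json.dumps(build_read_only_policy("pipeline-audit-logs"), indent=2))
```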
AWS Services That Support SoD
There are many services that provide capabilities supporting the principles of segregation of duties. Here are some of the AWS services that support SoD:
- AWS CloudTrail — across all regions with log file integrity validation (to ensure non-repudiation)
- Amazon CloudWatch monitoring and alarms — receive notifications and respond to events
- CloudWatch Logs — Log all changes to all relevant services. Store logs in S3 and apply least privilege IAM and S3 bucket policies for access
- AWS CodePipeline — orchestrate the deployment pipeline automation
- AWS Config and Config Rules — use managed and custom rules to take action and/or get notified when changes to services violate corporate policies
- Encryption — many AWS services provide encryption at rest and in transit, which helps prevent unauthorized changes and data exfiltration. For encrypting data at rest, these include using the AWS Key Management Service (KMS) with CloudTrail, DynamoDB, EBS, RDS, and S3 — to name a few. For data in transit, use TLS/SSL certificates (for example, via AWS Certificate Manager) with ELB for applications.
- AWS Identity and Access Management (IAM) — Ensure least privilege for all AWS resources.
- Amazon Macie — to proactively monitor sensitive data (e.g. PII) entering the system
- AWS Service Catalog — to enforce policies while maintaining autonomy
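As one concrete example of the Config Rules item above, here is a minimal sketch of a custom rule's Lambda handler that marks S3 buckets non-compliant when versioning is disabled. The compliance logic is a pure function so it can be unit tested; the `put_evaluations` call back to AWS Config is shown only as a comment, since it requires boto3 and a live invocation.

```python
import json

def evaluate_compliance(configuration_item):
    """Pure compliance logic for a custom AWS Config rule: S3 buckets
    must have versioning enabled so object history can't silently
    disappear."""
    if configuration_item["resourceType"] != "AWS::S3::Bucket":
        return "NOT_APPLICABLE"
    versioning = (configuration_item
                  .get("supplementaryConfiguration", {})
                  .get("BucketVersioningConfiguration", {}))
    return "COMPLIANT" if versioning.get("status") == "Enabled" else "NON_COMPLIANT"

def lambda_handler(event, context):
    # AWS Config delivers the configuration item inside a JSON string.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    compliance = evaluate_compliance(item)
    # In a deployed rule, report the result back to AWS Config:
    # boto3.client("config").put_evaluations(
    #     Evaluations=[{"ComplianceResourceType": item["resourceType"],
    #                   "ComplianceResourceId": item["resourceId"],
    #                   "ComplianceType": compliance,
    #                   "OrderingTimestamp": item["configurationItemCaptureTime"]}],
    #     ResultToken=event["resultToken"])
    return compliance
```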
Auditors are typically looking to see whether any one individual is capable of making changes that are deployed to production systems without others knowing. In other words, they're looking for controls that prevent someone from "going rogue" and circumventing the process. An educated auditor often appreciates the automation and auditing capabilities afforded by organizations that have embraced effective DevOps practices, and the principles behind the controls. They're also checking that permissions associated with services/tools cannot be circumvented by individuals without others' knowledge.
Adhering to the principle of least privilege, access to certain data is limited to those who have a need to know. Moreover, encryption at rest and in transit should prevent unauthorized parties from accessing certain data values. For example, in most organizations, engineers might never need access to production data in any form, since the databases and configuration can be generated and updated via versioned code. A good understanding of the type of data, and of how to create small but representative test data sets, helps increase the speed of feedback without giving engineers access to production data.
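A sketch of what generating such a test data set can look like; the field names and record shape are hypothetical. Seeding the random generator makes the data set reproducible, so the generator can live in version control like everything else:

```python
import random

def make_test_customers(n, seed=42):
    """Generate n fabricated customer records. Nothing here is derived
    from production data, so engineers can use it freely."""
    rng = random.Random(seed)  # deterministic: reruns produce the same data
    regions = ["us-east-1", "eu-west-1", "ap-southeast-2"]
    plans = ["free", "pro", "enterprise"]
    return [
        {
            "customer_id": f"CUST-{i:05d}",
            "email": f"user{i}@example.com",  # never real PII
            "region": rng.choice(regions),
            "plan": rng.choice(plans),
        }
        for i in range(n)
    ]

sample = make_test_customers(5)
```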
With the exception of production, there often isn't a need for approvals in the upstream environments because all of the environments are automated and locked. Each of the environments is based on code committed to version control and one of the purposes of the deployment pipeline is to test how the configuration applied to these environments affects the application/service behavior.
The intent of segregation of duties is to have "more than one person required to complete a task." The second person can be on the same team or a different one; separate teams aren't required, as long as there are appropriate tests, checks, and logs so that other interested parties can proactively monitor these changes through automated systems.
While least privilege should always be applied to sensitive data, there should be no issue with people spinning up development environments that aren't part of the system of record in order to experiment manually or otherwise with ideas that engineers might want to explore. This could be something as simple as a container that a developer might use locally or on AWS to experiment with some ideas that in no way affect the canonical system until they are translated to committed code in a version control system.
Releasing to Production
In addition to a pull request, there might be an approval to go to production where a human (or humans) must be present. For example, the product owner is responsible for approving the feature changes prior to releasing to production. The product owner clicks the button that performs the automation that deploys the changes to production.
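With AWS CodePipeline, "the button" can be a manual approval action, and the product owner's click corresponds to a `PutApprovalResult` API call. A minimal sketch with hypothetical pipeline, stage, and action names; the real call (commented out) requires boto3 and IAM permission to approve:

```python
def build_approval_request(pipeline, stage, action, token, approved, summary):
    """Build the parameters for CodePipeline's PutApprovalResult API.
    The token identifies the specific pending approval."""
    return {
        "pipelineName": pipeline,
        "stageName": stage,
        "actionName": action,
        "token": token,
        "result": {
            "summary": summary,
            "status": "Approved" if approved else "Rejected",
        },
    }

request = build_approval_request(
    pipeline="app-pipeline",          # hypothetical names
    stage="Production",
    action="ProductOwnerApproval",
    token="example-approval-token",
    approved=True,
    summary="Feature changes reviewed and approved for release",
)
# import boto3
# boto3.client("codepipeline").put_approval_result(**request)
```

Because the approval is an API call, it is logged by CloudTrail like every other change, preserving the audit trail.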
Keep in mind that in an enterprise that effectively applies DevOps practices, the cost of change is small because changes are made in small batches, so scarcity becomes less of a concern. That said, some changes can have a destructive effect on systems and their organizations, so some teams ensure through policy and/or technology that certain people must be present in order to make a potentially destructive change. In some cases, this might include going to production.
For example, someone from technology and someone representing the business/product interests of the company must both be present to release a change, or particular types of changes. In this case, the two-person rule can be used to ensure that two people must approve in order to click the button that releases the software. Everything is still automated as part of the deployment pipeline, but technology ensures that someone who carries something they have (e.g., an MFA token) and someone else who holds something they know (e.g., a password) are both present for these types of changes.
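One way to enforce the "something they have" half in IAM is a policy that allows the release approval only from an MFA-authenticated session, using the `aws:MultiFactorAuthPresent` condition key. A minimal sketch:

```python
import json

def build_mfa_release_policy():
    """Allow approving the release step only when the caller signed in
    with MFA; combined with a second, separately credentialed approver,
    no single person can release alone."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ApproveReleaseOnlyWithMFA",
                "Effect": "Allow",
                "Action": "codepipeline:PutApprovalResult",
                "Resource": "*",
                "Condition": {
                    "Bool": {"aws:MultiFactorAuthPresent": "true"}
                },
            }
        ],
    }

print(json.dumps(build_mfa_release_policy(), indent=2))
```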
Enterprises can adhere to the principle behind Segregation of Duties without slowing down effective feedback. This is especially true on AWS, where all of the infrastructure and the rest of the software system can be defined in versioned code and go through the exact same process every time new code is committed, as part of a single path to production.
Published at DZone with permission of Paul Duvall , DZone MVB. See the original article here.