Enforcing and Monitoring Security on AWS S3
Up your security game with AWS S3.
I am an avid follower of the AWS Online Tech Talks YouTube channel. It is a useful way to stay up-to-date on new or existing AWS features and services; I find it helpful to refresh and retain knowledge. Recently, I encountered a webinar about AWS S3 security, which prompted me to revisit my S3 policies and settings. I decided to consolidate some S3 security features and properties. In this article, I'll discuss the changes I made, along with some examples and my two cents.
What’s the Incentive?
Typically, in my day-to-day use of S3, security and permissions are rarely changed. In most cases, we set the security definitions at the time the S3 bucket is created and then forget about them. We do not bother to revalidate these security settings periodically.
Nevertheless, this practice is risky for a few reasons. Firstly, permissions can be changed deliberately or inadvertently. Such changes can compromise data without our awareness; preventing these cases is our obligation.
Secondly, the initial assumptions can become obsolete over time. In many cases, we start with a specific set of permissions that becomes irrelevant after a while; the concepts we started with may evolve, so the permissions that were allocated initially need to evolve as well. Otherwise, data can end up saved under inappropriate permissions, and misaligned permissions can grant broader data access than required. Looking through a security administrator's spectacles, compartmentalization and the breadth of each role's access should be reassessed over time.
Thirdly, it is not rare for a bucket's permissions to stop matching its content. The changes in our repository are frequent: existing files are altered and new files are added. Problems arise when users do not adhere to the organization's principles or when data is saved in the wrong place. These cases are common and can lead to discrepancies between an S3 bucket's content and its permissions.
To summarise this introduction, there are many incentives to continuously re-examine permissions and compartmentalization settings. A periodic review of permissions is not a nice-to-have procedure; it is a must if we want to ensure our data is adequately protected. This blog-post focuses on:
- S3 Access Control, including blocking public access and object lock
- S3 Encryption (server-side and client-side)
- Monitoring Security
Ready? Let’s start!
Part 1: Managing Permissions
The best practice is to start with minimal permissions and add more over time. This approach is better than starting with broad access control and narrowing it later, as it ensures we grant permissions only when there is a genuine need: always whitelist first rather than blacklist, since you can whitelist what you are familiar with, but you cannot blacklist the unknown.
S3 permissions are divided into three main features:
- Bucket Policy.
- Access Control List (ACL).
- Block Public Access.
Another element for managing data governance is Object Lock, although it does not fall under the classic permissions category.
AWS S3 Bucket Policy
The foundational building block of a bucket policy is IAM permissions. These permissions are global rather than region-specific; they define what an entity (user/role) can do in AWS and can be applied to one or more AWS services. By default, an IAM role is empty, meaning it does not have any permissions. In our context, IAM roles are referenced in a bucket policy.
The bucket policy defines who can access an S3 resource and is tied to a specific bucket. By default, buckets are private: only the bucket owner and the root account have access to the bucket.
Bucket policies can also allow cross-account access to S3 buckets without using IAM roles, by naming a user from the other account as the principal.
Below is an example of a bucket policy that allows CloudFront to run a GET command on the bucket.
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ERIM00000OAK9J"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-logs/*"
    }
  ]
}
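If you manage buckets programmatically, the same policy can be attached with a few lines of boto3. This is a minimal sketch, reusing the my-logs bucket and the policy document from the example above:

import json
import boto3

s3 = boto3.client("s3")

# The policy document from the example above (bucket name is illustrative).
policy = {
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [{
        "Sid": "1",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ERIM00000OAK9J"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-logs/*",
    }],
}

# Attach the policy to the bucket; note this overwrites any existing bucket policy.
s3.put_bucket_policy(Bucket="my-logs", Policy=json.dumps(policy))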
Harnessing S3 Objects Tags to Manage Permissions
Besides their metadata and labeling benefits, S3 object tags can also be useful for enforcing permissions. A tag's value can serve as a condition in a bucket policy; in that scenario, permissions are applied simply by tagging objects.
In the policy example below, all objects whose access-level tag equals public are granted public read access:
{
  "Version": "2008-10-17",
  "Id": "MyBucketPolicy",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-logs/*",
      "Condition": {
        "StringEquals": {
          "s3:ExistingObjectTag/access-level": "public"
        }
      }
    }
  ]
}
After applying the policy above, AWS indicates the bucket has public access:

Since a bucket policy can enforce a Deny rule too, the tagging feature can also be used to block access to objects based on their tags.
A policy condition's key must follow a standard format. It can be either an AWS predefined key (such as s3:VersionId) or a user-defined key; the latter should follow the format s3:ExistingObjectTag/<tag-key>. The following link is an excellent place to start with object tagging's condition keys.
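To make the mechanism concrete, here is a minimal boto3 sketch that tags an existing object so the policy above applies to it; the object key is hypothetical:

import boto3

s3 = boto3.client("s3")

# Tagging the object with access-level=public makes the bucket policy
# above grant it public read access (object key is hypothetical).
s3.put_object_tagging(
    Bucket="my-logs",
    Key="reports/summary.txt",
    Tagging={"TagSet": [{"Key": "access-level", "Value": "public"}]},
)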
AWS S3 Access Control List (ACL)
The Access Control List manages the permissions of several grantees:
- AWS authenticated users (with AWS account).
- Everyone (for public access).
- Log Delivery: relates to a particular log delivery account, which must be granted permission to write access logs to the bucket. Read more in this link.
ACL permissions are inclusive by definition (Allow only, without an option to Deny or block operations). They govern the following actions: List objects, Write objects, Read bucket permissions, and Write bucket permissions.
ACL’s definitions are broad and overly permissive; therefore, it is recommended to use IAM or bucket policies instead, as they allow more granular definitions.
Blocking Public Access Restrictions
Exposing buckets or objects to everyone by mistake is a prevalent phenomenon, mostly due to human error. The purpose of the Block Public Access group of restrictions is to regain control over public access, which can be granted through ACLs or bucket policies. Although granting or preventing public access can be achieved via the other bucket permissions, this set of restrictions overrules any inadvertent public permissions that were set earlier. The recommendation is to select the Block all public access option to ensure public access is completely prevented.

There are four security settings that modify the public access of items in the bucket; two relate to ACLs, and two to bucket policies.
- Block public access to buckets and objects granted through new access control lists (ACLs). This setting applies to newly added buckets and objects and prevents the creation of new public-access ACLs for existing objects. Note that any existing ACLs that allow public access to S3 resources remain intact.
- Remove public access granted to objects through any access control list (ACL). This setting overrules all ACLs that grant public access to objects; once the setting is switched off, the public-access ACLs become effective again (no reconfiguration needed).
- Block public access to objects granted through new public bucket policies. New bucket policies that grant public access to objects are blocked, but existing policies that allow public access to S3 resources remain in effect.
- Block public and cross-account access to objects through any public bucket policy. S3 ignores public and cross-account access for buckets with policies that grant public access to buckets and objects.
When the block public policy setting is turned on, trying to grant public access yields an error message:

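These four settings can also be applied programmatically. Below is a minimal boto3 sketch, assuming the same my-logs bucket, that turns all four restrictions on (equivalent to selecting Block all public access):

import boto3

s3 = boto3.client("s3")

# Enable all four Block Public Access restrictions on the bucket.
s3.put_public_access_block(
    Bucket="my-logs",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # block new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # block new public bucket policies
        "RestrictPublicBuckets": True,  # ignore public/cross-account access via public policies
    },
)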
S3 Object Lock
Object lock is a relatively new feature that was announced in November 2018. It can be enabled only as part of the bucket creation process, after enabling Versioning; once the bucket is created, Object Lock cannot be applied to it.
This feature should be used only if you need to prevent objects from being deleted, which is mostly required to ensure data integrity and regulatory compliance. The lock period can be defined at a later stage, after the bucket has been created.

S3 object lock provides two retention modes:
- Governance mode.
- Compliance mode.
Both retention modes have a retention period, but they differ by the possibility to bypass the lock during this period.
In Governance mode, you can grant permission to override or delete objects despite the bucket being considered locked. The user must have the s3:BypassGovernanceRetention permission and must explicitly include x-amz-bypass-governance-retention:true in the header of any request that overrides governance mode (this header is included by default when using the AWS Console). In addition, the retention period cannot be changed without proper permissions.
Compliance mode is stricter. No user, not even the root account, can override or delete objects during the retention period. You can move from governance mode to compliance mode, but not vice versa. Hence, it is advisable to use governance mode to test retention settings before enforcing a compliance-mode retention period.
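To illustrate the flow, here is a hedged boto3 sketch; the bucket name, region, and 30-day retention are assumptions for the example. Object Lock is requested at creation time, and a default governance-mode retention rule is applied afterwards:

import boto3

s3 = boto3.client("s3")

# Object Lock can only be requested when the bucket is created;
# this flag also enables Versioning behind the scenes.
s3.create_bucket(
    Bucket="my-locked-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    ObjectLockEnabledForBucket=True,
)

# Start with governance mode to test the retention settings;
# moving to compliance mode later is possible, but not the reverse.
s3.put_object_lock_configuration(
    Bucket="my-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)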
Part 2: S3 Encryption
There are two types of encryption: encryption in transit and encryption at rest. In-transit encryption secures the channel while data is transported between the client and the server. By default, communication via the AWS CLI and the console is encrypted, as are API calls (HTTPS).
Encryption at rest secures the object while it is saved on disk. If encrypted files are obtained by an unauthorized party, they cannot be opened without the encryption key. AWS offers a few options for data encryption at rest, which vary by how the encryption keys are kept and maintained.
In the first three methods, the encryption key is saved on the server (SSE stands for server-side encryption); in the fourth, the encryption key is held and managed by the customer (CSE stands for client-side encryption).
- SSE-S3: S3 manages the encryption keys.
- SSE-KMS: the encryption key is managed in AWS KMS.
- SSE-C: the encryption key is managed by the customer, but the cryptographic operation executes on the server-side.
- CSE: the client manages the encryption keys.
Setting server-side encryption on an S3 bucket is a one-time operation. Once encryption is applied, it is enforced only on newly created objects; if the bucket is not empty, the existing files will not be encrypted retroactively.
SSE-S3
This encryption is the easiest way to enforce data encryption at rest. All you need to do is tick a checkbox, and from that point on, all new objects will be encrypted.

Under the hood, each object is encrypted with a unique key. To ensure the key is protected, AWS encrypts the unique keys with a master key, which is rotated regularly.
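Besides ticking the checkbox in the console, default encryption can be set with boto3. A minimal sketch, assuming the my-logs bucket:

import boto3

s3 = boto3.client("s3")

# Enforce SSE-S3 as the bucket default: new objects are encrypted with AES-256.
s3.put_bucket_encryption(
    Bucket="my-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)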
SSE-KMS
The Key Management Service (KMS) keys can be managed by AWS or by the customer. Customer-managed keys can be generated by AWS, imported from an external source, or created in CloudHSM. The advantage of SSE-KMS over SSE-S3 is tighter control over the keys.
AWS automatically generates a default key in the region when the first SSE-KMS-encrypted object is added to a bucket; this default key is managed entirely by AWS. Alternatively, you can define your own key, which gives you flexibility in managing and controlling it; that flexibility is not available for AWS-generated keys.
Customer-managed keys are controlled by the user, who decides when to rotate, revoke, or delete them. Moreover, access to the keys can be restricted: each key is wrapped with a policy that defines which users can execute operations on it.
An example of a policy for a customer-managed KMS key:
{
  "Id": "key-consolepolicy-3",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::831111111:user/MyAdmin"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    }
  ]
}
Defining SSE-KMS is straightforward. You need to select the key from the drop-down list:

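The same can be done programmatically. The sketch below uses the same API as the SSE-S3 example, with the algorithm switched to aws:kms and a customer-managed key; the key ARN is illustrative:

import boto3

s3 = boto3.client("s3")

# Set default encryption with a customer-managed KMS key (ARN is illustrative).
s3.put_bucket_encryption(
    Bucket="my-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:eu-west-1:831111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            }
        }]
    },
)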
SSE-C
The main difference between SSE-KMS and SSE-C is the responsibility for storing and managing the keys: in SSE-KMS, key management is handled by AWS, while in SSE-C, the customer handles it. The cryptographic operations are based on a customer-provided key, which must be sent to the server with every request.
The cryptographic operations, such as encryption before writing to disk and decryption when reading from it, are done by S3; the client provides the key, and S3 does the rest.
S3 does not store the key itself; only a randomly salted hash-based message authentication code (HMAC) derived from the encryption key is kept, and it is used to validate requests. The encryption key cannot be reconstructed from this stored value.
Therefore, if the encryption key is lost, you cannot decipher the encrypted data; that is the risk of opting for this method.
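To make the request flow concrete, here is a hedged boto3 sketch of SSE-C. Generating the key with os.urandom is for illustration only; in practice, the key must come from your own key store:

import os
import boto3

s3 = boto3.client("s3")

# The customer supplies a 256-bit key on every request; S3 never stores it.
key = os.urandom(32)  # illustration only; load from your own key store in practice

s3.put_object(
    Bucket="my-logs",
    Key="secret-report.txt",  # hypothetical object key
    Body=b"sensitive content",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)

# Reading the object back requires presenting the same key again.
obj = s3.get_object(
    Bucket="my-logs",
    Key="secret-report.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)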
Enforcing Server-Side Encryption by Policy
A bucket policy can help dictate server-side encryption by rejecting any PUT request that lacks a header indicating encryption.
The policy below exemplifies the conditions for rejecting PUT requests without server-side encryption in the header (the AES256 value enforces SSE-S3):
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyBucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyBucket/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Similarly, the policy to apply the SSE-KMS encryption is:
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyBucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MyBucket/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
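With either of these policies attached, an upload passes only when the matching header is present. Below is a minimal boto3 sketch (bucket and object names are illustrative); boto3 sends the x-amz-server-side-encryption header when the ServerSideEncryption parameter is set:

import boto3

s3 = boto3.client("s3")

# Succeeds under the SSE-S3 policy; use ServerSideEncryption="aws:kms"
# to satisfy the SSE-KMS variant. Omitting the parameter yields AccessDenied.
s3.put_object(
    Bucket="MyBucket",
    Key="encrypted-object.txt",
    Body=b"some data",
    ServerSideEncryption="AES256",
)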
We have finished reviewing the three options for server-side encryption. Click here for further reading about server-side encryption. Let's move to encryption on the other side: the client.
Client-Side Encryption (CSE)
This is the fourth and last type of data encryption. CSE is fundamentally different from the methods above: the data is encrypted before reaching the server and decrypted only on the client side, whereas in server-side encryption the cryptographic operations happen at the server level, transparently to the consumer.
This is considered the most secure encryption method, as the data is deciphered only on the client side; no plaintext data ever reaches the server. However, the burden of managing and maintaining the encryption keys lies with the client. Moreover, since the data can be deciphered only on the client side, analytics cannot run on it in S3; that is the main drawback.
AWS provides two options to implement CSE:
- The customer master key (CMK) is stored in AWS KMS.
- The customer master key is stored on the client.
In the second option, the encryption keys are never sent to AWS, so it is crucial to manage them properly. If a master key is lost, all data encrypted with it is lost and cannot be deciphered. Thus, if you do not have a proper way to manage encryption keys, such as an HSM, it may be wiser to store the client keys in AWS KMS.
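boto3 does not encrypt on the client side by itself, so the sketch below pairs it with the cryptography package's Fernet recipe, purely for illustration of the second option; the AWS Encryption SDK is the more complete, KMS-integrated route:

import boto3
from cryptography.fernet import Fernet  # pip install cryptography

s3 = boto3.client("s3")

# The key is generated and kept on the client; losing it makes the data unrecoverable.
key = Fernet.generate_key()  # in practice, persist this in your own key store or HSM
fernet = Fernet(key)

# Encrypt locally and upload the ciphertext; S3 only ever sees encrypted bytes.
s3.put_object(
    Bucket="my-logs",
    Key="cse-object.bin",  # hypothetical object key
    Body=fernet.encrypt(b"sensitive content"),
)

# Download and decrypt on the client side.
obj = s3.get_object(Bucket="my-logs", Key="cse-object.bin")
plaintext = fernet.decrypt(obj["Body"].read())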
Part 3: Monitoring Security
This section describes four AWS services that can be used to monitor S3 security: AWS Config, Trusted Advisor, CloudTrail, and Amazon Macie.
AWS Config
AWS Config provides configuration history and change logs, which enable security governance. Compliance is determined against predefined rules. AWS provides an extensive set of rules per service, and users can also create and apply their own rules.

The service keeps an inventory of changes while running the rules automatically. It can be used for auditing, security analysis, resource change tracking, and troubleshooting. Detailed information about configuration changes and notifications is sent to CloudWatch, so alerts can be surfaced to the relevant users or services. For example, the dashboard below presents the non-compliant buckets.

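As a sketch, enabling one of the AWS managed rules for S3 via boto3 might look like this; the rule name is arbitrary, and a Config recorder must already be set up in the account:

import boto3

config = boto3.client("config")

# Enable an AWS managed rule that flags buckets allowing public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)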
Trusted Advisor
Among its various features, Trusted Advisor runs security checks; some are free, and some are accessible only to users with an upgraded support plan. It scans the account's resources and raises alerts based on fixed rules; one of these rules checks for buckets with public access permissions. You can request a weekly email with all of Trusted Advisor's notifications so that such alerts are not overlooked.

AWS CloudTrail
CloudTrail is a service that logs all API calls, successful or not. It thereby provides an event history of your AWS account activities by any actor (user, role, or another AWS service). It also logs actions done via any tool or access path, such as the AWS Management Console, the CLI, AWS SDKs, and other AWS APIs.
Defining a trail on an S3 resource simplifies security analysis, resource change tracking, and troubleshooting; all activities are stored in a dedicated destination (an S3 bucket), which allows better analysis and understanding of the process flow.
You can define the trail over one or more S3 buckets, and set its scope (management events and/or data events) and operations (read and/or write). Keep in mind that the trail's logs are stored indefinitely; to avoid storage costs, you can define a lifecycle policy on the bucket (delete or move to archive).
Browsing the S3 bucket that holds the trail reveals a Region/Year/Month/Day hierarchy, with the CloudTrail logs stored under the Day folder. It is tedious to examine and analyze logs just by browsing them; therefore, AWS suggests using Amazon Athena (SQL-like queries over S3) or raising CloudWatch events.
An active trail incurs cost; you can find more details on the CloudTrail pricing page.
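Here is a hedged boto3 sketch of creating such a trail; the trail name and bucket ARN are illustrative, and the destination bucket must already carry a policy like the one shown below:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a trail that delivers logs to the my-logs bucket (names are illustrative).
cloudtrail.create_trail(Name="s3-security-trail", S3BucketName="my-logs")

# Record read and write data events for a specific bucket's objects.
cloudtrail.put_event_selectors(
    TrailName="s3-security-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::my-data-bucket/"],
        }],
    }],
)

cloudtrail.start_logging(Name="s3-security-trail")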
Since we discussed bucket policies earlier in this blog-post, it is useful to see the policy of the bucket that holds the trail; it allows the CloudTrail service to put objects into the bucket:
{
  "Version": "2008-10-17",
  "Id": "MyBucketPolicy",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-logs"
    },
    {
      "Sid": "AWSCloudTrailWrite20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-logs/AWSLogs/path/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
Amazon Macie
Amazon Macie is a security service that uses machine learning to automatically discover sensitive data that should be protected. Macie presents a risk value based on the content of the data.
Macie identifies sensitive information based on regexes (IDs, license numbers, secret keys, etc.), file types (exchange files, office documents, business documents), and themes (financial information, network scans, passwords). It also has a feature that classifies CloudTrail events and errors, looking for sensitive data.

Macie exposes its findings via alerts and dashboards. The user can run queries, filter alerts by risk, type, or location, and dive into their details.

Using Macie is an efficient way to scan the vast amounts of data in your S3 buckets and surface risks. Although this service is not free, you might consider using it to mitigate data breaches and enforce compliance.
Wrapping Up
If I had to choose one significant takeaway from this blog-post, it would be the necessity of continuously vetting your S3 security. The way permissions and policies were originally defined may no longer be valid and can lead to data breaches. Although I encourage you to revisit your data policies regularly, that may not always be realistic, and thus implementing monitoring is imperative.
Although I strived to write a comprehensive post, some topics were not covered here, such as controlling access to S3 via VPC endpoints or implementing data encryption using the AWS SDK. This post is long enough as it is, so those will wait for future posts.
I hope you found this blog-post useful. Thanks for reading.
Until next time, keep on clouding ⛅.
— Lior