
How to Give Access to AWS Resources Without Creating 100s of IAM Users

This post demonstrates the use of AWS Security Token Service to give access to AWS resources to users that don't exist in AWS IAM.


Scenario

Imagine you are a solution architect at a company with hundreds of sales employees, and you are migrating from on-premises infrastructure to the AWS Cloud. You want to keep the existing employee authentication system, and you want to store the files that employees use in their day-to-day work on S3. You don't want to make the S3 bucket public, which would expose all files to everybody. You have two options:

  1. Create a single role with S3 bucket access and login credentials for all of the employees. With this approach, users have to use two different logins: one to access their existing system and another to access the S3 files.

  2. Use AWS Security Token Service (STS) to assume a role with S3 access and use that role to give access to the files. Users still authenticate with their existing system.

In this post, we will explore and implement option #2. Please note that we are building this example on top of the previous post.

About Security Token Service (STS)

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate yourself. You can use the AssumeRole action on STS, which returns a set of temporary security credentials that you can use to access AWS resources you might not normally have access to. These temporary credentials consist of an access key ID, a secret access key, and a security token. Typically, you use AssumeRole within your account or for cross-account access. In our case, we are using AssumeRole within the same account.
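If you just want to see what AssumeRole hands back, here is a minimal boto3 sketch. The role ARN is a placeholder (not the role we create later in this post), and it assumes the caller already has sts:AssumeRole permission on that role.

Python

import boto3

sts = boto3.client('sts')

# Placeholder role ARN -- replace with a role your credentials are allowed to assume.
resp = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/SomeReadOnlyRole',
    RoleSessionName='demo-session',
    DurationSeconds=900  # temporary credentials expire; 900 seconds is the minimum
)

creds = resp['Credentials']
# The three pieces mentioned above: access key ID, secret access key, and session token
print(creds['AccessKeyId'], creds['SecretAccessKey'], creds['SessionToken'])
print('Expires at:', creds['Expiration'])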

How to Set Up Users and Roles?

In our case, we are going to create a role called S3ReadOnlyAccessAssumeRole. As the name suggests, it has only an S3 read-only access policy. We will also create a trust policy and attach it to this S3 role. The trust policy allows this role to be assumed by our Lambda execution role. Here is how our SAM template looks for this role.

YAML

IamS3Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
       Version: 2012-10-17
       Statement:
          - Effect: Allow
            Principal:
              AWS: !GetAtt ShowFilesFunctionRole.Arn
            Action:
              - 'sts:AssumeRole'
      Description: 'Read-only S3 role for Lambda to assume at runtime'
      ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
      RoleName: S3ReadOnlyAccessAssumeRole

In the above snippet, the AssumeRolePolicyDocument attribute specifies the trust policy: it allows the Lambda execution role, identified by the principal AWS: !GetAtt ShowFilesFunctionRole.Arn, to assume the role this policy is attached to, S3ReadOnlyAccessAssumeRole. The ManagedPolicyArns attribute specifies the policy for S3ReadOnlyAccessAssumeRole that allows read-only access to S3 buckets.
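If you want to double-check the trust relationship after the stack is deployed, a hedged verification sketch with boto3 (assuming your local credentials have iam:GetRole and iam:ListAttachedRolePolicies) could look like this:

Python

import json
import boto3

iam = boto3.client('iam')

# The trust policy should name the Lambda execution role's ARN as the
# principal allowed to call sts:AssumeRole.
role = iam.get_role(RoleName='S3ReadOnlyAccessAssumeRole')['Role']
print(json.dumps(role['AssumeRolePolicyDocument'], indent=2))

# The attached managed policy should be AmazonS3ReadOnlyAccess.
attached = iam.list_attached_role_policies(RoleName='S3ReadOnlyAccessAssumeRole')
print([p['PolicyArn'] for p in attached['AttachedPolicies']])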

Lambda Handler

Now, let's write our Lambda handler that will use this role. Here is the SAM configuration.

YAML

  Origin:
    Type: String
    Default: https://stsexamplebucket.s3.us-east-2.amazonaws.com
  FilesBucket:
    Type: String
    Default: s3-sales-rep

YAML

  ApiGatewayShowFilesApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      Auth:
        UsagePlan:
          CreateUsagePlan: PER_API
          Description: Usage plan for this API
          Quota:
            Limit: 500
            Period: MONTH
          Throttle:
            BurstLimit: 100
            RateLimit: 50
  ShowFilesFunction:
    Type: AWS::Serverless::Function
    Properties:
      Environment:
        Variables:
          userTable: !Ref myDynamoDBTable
          s3role: !GetAtt IamS3Role.Arn
          origin: !Sub ${Origin}
          filesBucket: !Sub ${FilesBucket}
      CodeUri: Lambda/
      Handler: showfiles.lambda_handler
      Runtime: python3.8
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref myDynamoDBTable
      Events:
        getCounter:
          Type: Api
          Properties:
            Path: /showFiles
            Method: GET
            RestApiId: !Ref ApiGatewayShowFilesApi

We are defining a couple of parameters here that will be set as environment variables for the Lambda function. With Origin, we specify the origin domain for CORS; it is our S3 bucket's virtual-hosted-style URL. FilesBucket is the bucket where the files are stored. In the serverless function definition, the handler is showfiles.lambda_handler, so the code lives in showfiles.py. The function has permission to use the DynamoDB table. We also create an API for this Lambda with the path /showFiles.

Let's see what we do in the Lambda handlers. We modified the login.py Lambda handler from the previous post to set a cookie once the user is authenticated. This is completely optional and not strictly required for STS to work, but you need some way to identify that the user is already authenticated.

Python

    if decryptedPass == pwd:
      # Generate a random session token and store it against the user record
      token = secrets.token_hex(16)
      response = table.update_item(
        Key={
            'userid': uname
        },
        AttributeUpdates={
            'token': {
                'Value': token,
            }
        }
      )

      # Return the token in a cookie and redirect the browser to the showFiles page
      return {
        'statusCode': 200,
        'headers': {
             'Set-Cookie': 'tkn='+uname+'&'+token+';Secure;SameSite=None;HttpOnly;Domain=.amazonaws.com;Path=/',
             'Content-Type': 'text/html'
         },
        'body': '<html><head><script>window.location.href = \''+ os.environ['showFilesUrl']+'\' </script></head><body>Hello</body></html>'
      }
    else:
      response['status'] = 'Login Failed'
      return {
        'statusCode': 200,
        'body': json.dumps(response)
      }

When the user submits the username and password, the login Lambda handler authenticates the user, stores the unique token in the database, and sets the cookie on a response that redirects the browser to the showFiles.html page in the S3 bucket. On page load, that page calls the showFiles API, which triggers the showFiles Lambda handler defined in showFiles.py.

showFiles.html


HTML

<html>
<head>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.css" integrity="sha512-8bHTC73gkZ7rZ7vpqUQThUDhqcNFyYi2xgDgPDHc+GXVGHXq+xPjynxIopALmOPqzo9JZj0k6OqqewdGO3EsrQ==" crossorigin="anonymous" />
<script
  src="https://code.jquery.com/jquery-3.1.1.min.js"
  integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8="
  crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.4.1/semantic.min.js"></script>
</head>
<body>
<div class="ui raised very text container">
<h1 class="ui header">File Access System</h1>
<i class="folder open icon"></i><div class="ui label">Files</div>
<div id="files">Loading..</div>
</div>
</body>
<script>
// Call the showFiles API with credentials included so the tkn cookie is sent along
fetch("https://9nimlkmz74.execute-api.us-east-2.amazonaws.com/Prod/showFiles/", {
  credentials: 'include'
})
  .then(response => response.text())
  .then((body) => {
    var files = "";
    var obj = JSON.parse(body);
    for (i = 0; i < obj.length; i++) {
            files = files + "<i class='file alternate outline icon'><a href='#'>&nbsp;&nbsp;" + obj[i] + "</a>";
    }
    document.getElementById("files").innerHTML = files;
  })
  .catch(function(error) {
    console.log(error);
  });
</script>
</html>

We call the showFiles API, which gets the list of files from the S3 bucket, and display them on the page.

showFiles.py

Python

import json
import logging
import boto3
import os

log = logging.getLogger()
log.setLevel(logging.INFO)

# Returns login cookie information: userid and unique token
def getLoginCookie(cookies):
    data = {}
    for x in cookies:
      keyValue = x.split('=')

      if keyValue[0].strip() == 'tkn':
        cookieValue = keyValue[1]
        tknvalues = cookieValue.split('&')
        data['uid'] = tknvalues[0]
        data['tkn'] = tknvalues[1]
      else:
        cookieValue = ''
    return data

# Verifies the unique token saved in the database against the one in the request
def verifyLogin(data):
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table(os.environ['userTable'])
    response = table.get_item(Key={'userid': data['uid']})
    json_str = json.dumps(response['Item'])

    resp_dict = json.loads(json_str)
    token = resp_dict.get("token")
    return bool(token == data['tkn'])

# Returns list of files from the bucket using STS
def getFilesList():
    sts_client = boto3.client('sts')

    # Call the assume_role method of the STS client and pass the role
    # ARN and a role session name.
    assumed_role_object = sts_client.assume_role(
        RoleArn=os.environ['s3role'],
        RoleSessionName="AssumeRoleSession1"
    )

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    credentials = assumed_role_object['Credentials']

    # Use the temporary credentials that AssumeRole returns to make a
    # connection to Amazon S3
    s3_resource = boto3.resource(
        's3',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
    )

    bucket = s3_resource.Bucket(os.environ['filesBucket'])
    files = []
    for obj in bucket.objects.all():
        files.append(obj.key)
    return files


def lambda_handler(event, context):
    headers = event.get("headers")
    cookies = headers['Cookie'].split(";")
    data = getLoginCookie(cookies)
    isVerified = verifyLogin(data)

    # Only list the files if the token in the cookie matches the one stored in DynamoDB
    response = []
    if isVerified:
        response = getFilesList()

    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': os.environ['origin'],
            'Access-Control-Allow-Credentials': 'true'
        },
        'body': json.dumps(response)
    }

Focus on the lambda_handler function here. We first get the cookie and verify the login. If verified, we call the getFilesList function, where the magic of STS happens. We get the ARN of the role to be assumed from the Lambda environment variables. The assume_role call returns credentials that contain an access key ID, a secret access key, and a session token. You can use these credentials to get an S3 resource and access the bucket. We build the list of files as an array and send it as the response.
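To see the role switch for yourself, here is a small hedged sketch (not part of the handler) that compares the caller identity before and after assume_role; the role ARN is a placeholder:

Python

import boto3

sts = boto3.client('sts')

# Identity of whoever runs this sketch (in our setup, the Lambda execution role)
print('Before:', sts.get_caller_identity()['Arn'])

assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/S3ReadOnlyAccessAssumeRole',  # placeholder ARN
    RoleSessionName='IdentityCheck'
)
creds = assumed['Credentials']

# An STS client built from the temporary credentials reports the assumed-role ARN
assumed_sts = boto3.client(
    'sts',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print('After:', assumed_sts.get_caller_identity()['Arn'])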

You can find the full SAM template.yml and the rest of the code for this post at github.com/rajanpanchal/aws-kms-sts.

Before you run sam deploy, create an S3 bucket that will store the HTML files. In our case, it's stsexamplebucket. We use URLs from this bucket in our SAM template. On deploy, SAM generates output with three URLs for the APIs. Modify the HTML files to point to those URLs and upload them to the S3 bucket. Make sure you make those files public.
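For convenience, here is a hedged boto3 sketch for uploading the edited HTML pages. The bucket name matches the one used above, but the file list is an assumption from the previous post; the bucket must also allow public ACLs.

Python

import boto3

s3 = boto3.client('s3')

html_bucket = 'stsexamplebucket'

# Assumed page names -- adjust to whatever HTML files your setup uses.
for page in ['login.html', 'showFiles.html']:
    s3.upload_file(
        page,
        html_bucket,
        page,
        ExtraArgs={
            'ContentType': 'text/html',  # so the browser renders the page instead of downloading it
            'ACL': 'public-read'         # the pages must be publicly readable
        }
    )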

Testing

You also need to create a bucket with the name specified in the FilesBucket parameter of the SAM template. This bucket stores the files to be displayed.

Sample bucket
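A hedged sketch for creating that bucket and seeding it with a few sample files; the region and file names are assumptions, and the bucket name must match the FilesBucket parameter default.

Python

import boto3

s3 = boto3.client('s3', region_name='us-east-2')

files_bucket = 's3-sales-rep'  # must match the FilesBucket parameter

# Outside us-east-1, a LocationConstraint is required when creating a bucket.
s3.create_bucket(
    Bucket=files_bucket,
    CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}
)

# Seed a couple of placeholder files so the listing page has something to show.
for name in ['q1-report.txt', 'q2-report.txt']:
    s3.put_object(Bucket=files_bucket, Key=name, Body=b'sample content')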

Summary

In summary, we used a custom identity broker that authenticates the user and then AWS STS to give those users access to AWS resources. You might wonder whether we could simply have given S3 access to the Lambda execution role instead of using STS. Of course you can, and it would work. But the idea here is to demonstrate STS, and you can use STS independently of Lambda in your other applications, such as Spring Boot, Java, and Python applications. In the next post, we will extend this further to use S3 signed URLs to give access to files stored in S3.
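For example, a standalone script outside Lambda can use the same pattern. Here is a hedged sketch using a boto3 Session built from the temporary credentials; the role ARN and bucket name are placeholders.

Python

import boto3

def list_files_with_sts(role_arn, bucket_name):
    """Assume a read-only role and list a bucket's keys -- same idea, no Lambda required."""
    creds = boto3.client('sts').assume_role(
        RoleArn=role_arn,
        RoleSessionName='StandaloneApp'
    )['Credentials']

    # A Session scoped to the temporary credentials returned by STS
    session = boto3.Session(
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    bucket = session.resource('s3').Bucket(bucket_name)
    return [obj.key for obj in bucket.objects.all()]

if __name__ == '__main__':
    # Placeholder values -- replace with your role ARN and bucket.
    print(list_files_with_sts('arn:aws:iam::123456789012:role/S3ReadOnlyAccessAssumeRole',
                              's3-sales-rep'))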

Feel free to point out any suggestions or errors and give your feedback in the comments!

