Automating AWS Lambda Deployments Using Bitbucket Pipelines and Bitbucket Pipes

Check out how you can integrate your favorite vendor-supplied pipeline using Bitbucket Pipes.

By Ayush Sharma · Jun. 17, 2019 · Tutorial

Today we’ll talk about Bitbucket Pipes, a new Bitbucket feature that can automate Lambda deployments on AWS.

So before we get our hands dirty, here’s a basic overview.

Lambda is AWS’s managed Functions-as-a-Service offering, and it works like other managed services on AWS. We define a Python/Node/Java function and an API endpoint, then upload them to the Lambda service. Our function handles the request-response cycle, while AWS manages the underlying infrastructure for us. This frees up our time to focus on building applications rather than managing infrastructure.

Bitbucket Pipelines is the Continuous Integration/Continuous Delivery pipeline integrated into Bitbucket. It works by running a sequence of steps after we merge or review code. Bitbucket executes these steps in an isolated Docker container of our choice. Here is my past tutorial on Pipelines deployments.

Bitbucket Pipes is the new feature we’ll test drive today. It is a marketplace of third-party integrations: a Pipe is a parameterized Docker image containing ready-to-use code. Using one looks something like this:

- pipe: <vendor>/<some-pipe>
  variables:
    variable_1: value_1
    variable_2: value_2
    variable_3: value_3


Pipes by AWS, Google Cloud, SonarQube, Slack, and others are already available. They are a way to abstract away repeated steps, which makes code reviews easier and deployments more reliable, and lets us focus on what is being done rather than how it is being done. If a third-party Pipe doesn’t work for you, you can even write your own custom Pipe.


Goal: Deploy a Lambda Using Pipes

So our goal today is as follows: we want to deploy a test Lambda function using the new Pipes feature.

To do this, we’ll need to:

  1. Create a test function.
  2. Configure AWS credentials for Lambda deployments.
  3. Configure credentials in Bitbucket.
  4. Write our pipelines file which will use our credentials and a Pipe to deploy to AWS.

Step 1: Create a Test Function

Let’s start with a basic test function. Create a new repo, and add a new file called lambda_function.py with the following contents:

def lambda_handler(event, context):
    # Ignore the incoming event and context; return a static response.
    return "It works :)"

Step 2: Configure AWS Credentials

We’ll need an IAM user with the AWSLambdaFullAccess managed policy.

Add this user’s access and secret keys as variables, either at the Account level, the Deployment level, or the Repository level. Make sure to mask and encrypt these values. You can find more information about these variable levels here.
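
Before storing the keys in Bitbucket, it’s worth confirming they actually work. Here is a minimal sketch using boto3’s STS client, where the two key values are placeholders for the new IAM user’s credentials:

import boto3

# Ask STS who these credentials belong to; a valid key pair prints the user's ARN.
sts = boto3.client(
    "sts",
    aws_access_key_id="<access-key-id>",          # placeholder
    aws_secret_access_key="<secret-access-key>",  # placeholder
)
print(sts.get_caller_identity()["Arn"])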

Step 3: Create Our Pipelines File

Now create a bitbucket-pipelines.yml file and add the following:

pipelines:
  default:
    - step:
        name: Build and package
        script:
          - apt-get update && apt-get install -y zip
          - zip code.zip lambda_function.py
        artifacts:
          - code.zip
    - step:
        name: Update Lambda code
        script:
          - pipe: atlassian/aws-lambda-deploy:0.2.1
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
              AWS_DEFAULT_REGION: 'us-east-1'
              FUNCTION_NAME: 'my-lambda-function'
              COMMAND: 'update'
              ZIP_FILE: 'code.zip'


The first step in the pipeline packages our Python function into a zip file and passes it as an artifact to the next step.

The second step is where the magic happens. atlassian/aws-lambda-deploy:0.2.1 is a Dockerized Pipe for deploying Lambdas; its source code can be found here. We call this Pipe with six parameters: our AWS credentials (access key and secret key), the region we want to deploy to, the name of our Lambda function, the command we want to execute, and the name of our packaged artifact.
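
One thing to note: the update command updates the code of an existing function, so my-lambda-function needs to exist in us-east-1 before the first pipeline run. If it doesn’t yet, we can create it once up front. Here is a minimal boto3 sketch, where the role ARN is a placeholder for a real Lambda execution role:

import boto3

# One-time setup: create the function that the Pipe will update on each deployment.
# Assumes code.zip (containing lambda_function.py) is in the current directory.
client = boto3.client("lambda", region_name="us-east-1")

with open("code.zip", "rb") as f:
    client.create_function(
        FunctionName="my-lambda-function",
        Runtime="python3.7",
        Handler="lambda_function.lambda_handler",
        Role="arn:aws:iam::<account-id>:role/<execution-role>",  # placeholder ARN
        Code={"ZipFile": f.read()},
    )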

Step 4: Execute Our Deployment

Committing the above changes to our repo will trigger a pipeline for this deployment. If all goes well, both steps should complete successfully in the pipeline view.
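
We can also verify the deployment from outside Bitbucket by invoking the function directly. A minimal sketch, assuming boto3 and working AWS credentials on the machine running it:

import json

import boto3

# Invoke the deployed function synchronously and print its JSON response.
client = boto3.client("lambda", region_name="us-east-1")
response = client.invoke(FunctionName="my-lambda-function")
print(json.loads(response["Payload"].read()))  # prints: It works :)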

Wrapping It Up

With the above pipeline ready, we can use other Bitbucket features to improve it. Features like merge checks, branch permissions, and deployment targets can make deployments smoother. We can also tighten the IAM user’s permissions to ensure it has access to only the resources it needs.

Using Pipes in this way has the following advantages:

  1. They simplify pipeline creation and abstract away repetitive details. Just paste in a vendor-supplied pipeline, pass in your parameters, and that’s it!
  2. Code reviews become easier. Ready-to-use Pipes can abstract away complex workflows.
  3. Pipes use semantic versioning, so we can pin a Pipe to a major or minor version as we choose (for example, 0.2.1 versus 0.2). Changing a Pipe’s version can go through a PR process, making updates safer.
  4. Pipes can even send Slack and PagerDuty alerts after deployments.

And that’s all. I hope you’ve enjoyed this demo. You can find more resources below.

Happy coding!

Resources

  • Bitbucket Pipes announcement.
  • Bitbucket Pipes documentation and feature demo.
  • Bitbucket Pipes repository.
  • Variables in Pipelines.
  • Source code for AWS Lambda Pipe by Atlassian.
  • IAM Managed Policies.

Published at DZone with permission of Ayush Sharma, DZone MVB. See the original article here.
