
Replicating Bitbucket Pipelines on Your Laptop for Local Debugging


This article demonstrates how to clone your commit and replicate a serverless deployment locally to quickly diagnose and debug issues.

· DevOps Zone ·

Bitbucket Pipelines is one of my favorite CI/CD tools, and I use it heavily every day. Given the full range of use cases for Pipelines, I frequently have to diagnose and debug new issues, and that process starts with being able to replicate the problem quickly in my local environment.

I’m going to walk you through the process of replicating a serverless deployment locally. For this exercise, I’ll use this serverless deployment as a reference.

Clone Your Commit

Like all good CI/CD build tools, Pipelines happily gives you the actual commit ID that triggered a particular build. In the Pipelines dashboard, the link to the commit ID is at the top left. In my case, the commit ID is 422c59b6f54c1a4e8xxxxxxxxxx. So once I’ve cloned my repository locally, I’m going to check out this commit.

Shell

git checkout 422c59b6f54c1a4e8xxxxxxxxxx


Checking out the same commit ID as the Pipelines build is vital. You don’t want to check out the branch, only to find that another developer updated it right after your build. Remember that a git commit ID is a unique pointer to a change, so use it as intended and make sure you and your Pipelines are on the same commit.
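As a quick sanity check (not part of the original walkthrough), you can print the local HEAD and compare it by eye against the commit ID shown in the Pipelines dashboard:

```shell
# Print the full commit ID of the current checkout; it should match
# the commit ID shown for the build in the Pipelines dashboard.
git rev-parse HEAD
```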

Launch Your Pipelines Container

With the clone and checkout complete, go to your clone directory and launch the same Docker container as specified in your Pipelines.

To begin, cat your bitbucket-pipelines.yml to identify the container you’re using.
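The file itself isn’t reproduced here, but based on the image and build commands used later in this article, a minimal bitbucket-pipelines.yml for this setup might look like the following (the step name is illustrative):

```yaml
image: node:11.13.0-alpine

pipelines:
  default:
    - step:
        name: Package serverless project   # illustrative step name
        script:
          - apk add python3
          - npm install -g serverless
          - serverless package --package /tmp/myserverlesspackage
```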

Launch the node:11.13.0-alpine container and mount your current directory:

Shell

docker run -v `pwd`:/mycode -it node:11.13.0-alpine /bin/sh


Once the command runs successfully, you’ll be in the Docker shell. Change to the mounted directory to ensure the code is present.

Shell

/ # cd /mycode
/mycode # ls

Prepare Your Build Locally

The hard part is already over! Now it’s just a matter of executing the same build steps in this container. Only this time, we’re on our laptop instead of on Atlassian’s cloud!

In my container shell, I’m going to execute these commands:

Shell

/mycode # apk add python3; npm install -g serverless


I’m skipping the serverless config steps because I won’t be deploying anything from my laptop. At least not directly, and not without a PR :)

Running Diagnostics

To debug my serverless deployment, I find two things handy.

First, serverless provides a debug mode that we can enable by setting an environment variable.

Shell

/mycode # export SLS_DEBUG="*"


With that done, we’ll package our serverless deployment without actually deploying it to the cloud. Sounds interesting, right? The command is as follows:

Shell

/mycode # serverless package --package /tmp/myserverlesspackage


This command will create the CloudFormation package for our project and dump the files inside /tmp/myserverlesspackage. Let’s go there now and see what we have:

Shell

/mycode # cd /tmp/myserverlesspackage
/tmp/myserverlesspackage # ls
cloudformation-template-update-stack.json
cloudformation-template-create-stack.json
serverless-state.json
...


Nice! Our serverless package built successfully. As you can see, our CloudFormation template is right there.

What we’ve done so far might not seem like much, but we already have quite a lot of diagnostic information:

  1. If the same errors appear during the local package step, you’ve eliminated Pipelines and everything that happens after it as a potential culprit. Narrowing down the problem area is critical to diagnosing issues.
  2. We can build the previous version of our project and compare the sizes of the CloudFormation templates, which can be diagnostically significant. This is much harder to do on Bitbucket, especially if your build keeps failing.
  3. Replicating the environment locally also lets us experiment with container versions. If you’ve been changing container versions and re-running Pipelines to test them, this will be a huge time-saver.
  4. You can also check the memory and CPU consumption of your build container in a controlled environment. If your Pipelines have been complaining about resource constraints, you now have a way to investigate.
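For point 2, comparing sizes can be as simple as packaging each commit into its own directory and running du over both. The directory names below are hypothetical, standing in for two separate `serverless package --package <dir>` runs:

```shell
# Compare the on-disk size of two locally packaged builds.
# /tmp/pkg-current and /tmp/pkg-previous are hypothetical output
# directories from packaging the current and previous commits.
du -sh /tmp/pkg-current /tmp/pkg-previous
```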

Local debugging avoids the friction of triggering your Pipeline every time you make a change.

That means easier debugging, better debugging, and overall happier engineers :)

Topics:
automation, bitbucket pipelines, debugging, devops, diagnostics, local deployment, serverless

