Introducing Jet, Codeship’s Platform for Docker
Reduce the amount of extra work you do around deployments, and stick with your current Docker workflows. Sounds like a very useful service.
At Codeship, we’ve been working on CI as a service for over five years now. Over those years, we’ve seen many different needs for the build environments customers want to set up and for how they want to manage their workflows. Our goal has always been to make every team, large or small, more productive and more focused on what actually makes a difference for their business.
Over the last 12 months, we’ve been working on our new infrastructure that gives all of our users an incredible amount of flexibility for the build environment and workflow setup. Today, we’re officially launching Jet, our new platform for Docker on Codeship.
We’ve built the new system on top of Docker so that teams using Docker can very easily integrate our system with their development workflow. We also made sure that teams who aren’t using Docker at the moment will have a very easy time getting started. The learning curve is fairly flat, but the power of the system can fulfill the needs of large and complex infrastructures as well.
In this post, I’ll explain the intent behind the new infrastructure as well as the direction we want to take Codeship now that it’s in place. In a follow-up post, we’ll cover the details of how the infrastructure works. You can also join our webinar next week, where I’ll walk through a demo and answer any questions about Codeship’s new system.
Flexible Environments to Match Your Complex Needs
The classic Codeship infrastructure has many advantages for teams. All tools and languages are already installed so you can get started right away.
The downside, though, is that you don’t have the same level of control over what’s installed as you do on your own systems. So while we could match many different configurations, we sometimes couldn’t fully replicate what you run in production with our previous system.
After many discussions with customers and internal planning, we decided we would need to fully rewrite our build system to give you the utmost control. Control here is not just about which software is installed but also how the infrastructure is set up. Running a complex integration test that includes many different pieces of software and containers should be as easy to set up as a simple unit test run.
For that, we allow you to build your own containers through Docker and Dockerfiles or use existing containers from any registry, including the official images from Docker Hub. You can then link those containers together in the codeship-services.yml file in your repository to set up your build infrastructure, using a syntax that follows Docker Compose. You can read all about the syntax in our documentation.
By linking those containers together, you can assemble very complex build systems with a simple syntax.
The following is a sample configuration file that sets up a container with Ruby installed through the Dockerfile and with Redis and Postgres connected to it. Redis and Postgres are then available from the app container so you can run your tests.
app:
  build:
    image: codeship/rails-app
    dockerfile_path: Dockerfile
  links:
    - redis
    - postgres
redis:
  image: redis:2.6.17
postgres:
  image: postgres:9.3.6
In the next blog post, as well as in our webinar, we’ll go into more detail on the different workflows and complex setups you can run. Now that we have the build environment set up, we want to run our actual build steps.
A Workflow System for All Your Needs
We run builds for lots of different customers with widely different needs, so we decided to make our workflow system more generic. That way, you can build any complex workflow with it. At the same time, it’s very easy to understand and get started, and any developer on your team should be able to pick it up quickly.
You set up the workflow through the codeship-steps.yml file in your repository. Steps can be nested to build up a graph that includes parallelization and dependencies between steps.
The following is a simple example of a workflow file. It uses the previously defined app container to run three parallel steps that all run a different part of the test suite.
- type: parallel
  encrypted_dockercfg_path: dockercfg.encrypted
  steps:
    - service: app
      command: ./script/ci/ci.parallel spec
    - service: app
      command: ./script/ci/ci.parallel plugin
    - service: app
      command: ./script/ci/ci.parallel qunit
All of the steps are running on the same AWS instance, but in separate and isolated containers. Only your build is running on that instance.
Running them on a dedicated instance for the build increases the security. Even if somebody is able to break out of the Docker container and take over the host machine, they won’t have access to any code or secrets from another company. We’re not reusing machines for builds of other customers.
On top of that, this allows us to give you the choice of machine size you want to use. From a 2-core, 3 GB RAM machine up to 32 cores and 60 GB of RAM, you can parallelize and make your builds incredibly fast. We don’t limit the number of parallel steps you can have in your configuration file; the only limits are the resources of the AWS instance you selected as part of your subscription.
This generic workflow model allows you to mix and match testing and deployment. It also makes sure that any features we implement for the workflow system are available for running tests, deployment, or any other part of your build workflow.
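As a sketch of what mixing testing and deployment in one codeship-steps.yml might look like (the script paths are made up, and the tag key for restricting a step to a branch is shown as an illustration; consult the documentation for the exact attribute names):

```yaml
# Run the test suite first, then deploy, but only on the master branch.
- service: app
  command: ./script/ci/run_tests   # illustrative test script
- service: app
  tag: master                      # illustrative: restrict this step to one branch
  command: ./script/deploy         # illustrative deploy script
```

Because deployment is just another step, every workflow feature, such as parallelization and branch restrictions, applies to it the same way it applies to tests.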
Now that we have the build environment and workflow configuration stored in files in our repository, we should be able to run the whole build on our development machines as well.
Run the Build on Your Machine
Being able to run the whole build on your local machine has many advantages. For example, you can easily configure or debug your build. If you want to set up a more complex workflow, it’s painful to have to commit, push to the repo, see what happens, and then do it again.
Instead, with Codeship you can use our CLI tool to run the whole build on your local development machine. It simply uses Docker on your machine to run in the same environment your build runs in on Codeship. This cuts down on the time spent setting up and experimenting with your build, so your team can be more productive.
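A quick sketch of what that local run can look like; the exact commands and flags depend on the CLI version you have installed, so treat these as illustrative and check the documentation:

```shell
# Assumes the jet CLI is installed and you are in the repository root,
# next to codeship-services.yml and codeship-steps.yml.
jet steps        # run the full step graph locally, using Docker on your machine
jet run app bash # illustrative: open a shell in one service container to debug
```

Because the same configuration files drive both the local and the hosted run, a build that passes locally should behave the same way on Codeship.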
This also addresses another goal we had in mind for this new infrastructure: making sure your CD process is available 100 percent of the time.
Getting to 100 Percent Uptime
Continuous delivery is such an important element for many teams and all of our customers that anything below 100-percent availability of the process is a no-go.
As much work as we’ve put into this, and as good as our track record is, there are many different services involved in running Codeship. Any of them can impact the process, even ones entirely outside our control, such as GitHub having an outage.
Our CLI tool makes sure that even in those times you’re able to deploy from the same environment that you’ve set up to run on Codeship. You can use the tool together with encrypted environment variables stored in your repository to run the build on the local machine of a developer and still go through with the deployment.
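As a sketch of how encrypted environment variables can be wired into the services file (the encrypted_env_file key and the file names below are assumptions for illustration; see the documentation for the exact syntax):

```yaml
# codeship-services.yml (sketch): reference an encrypted env file from a service.
app:
  build:
    image: codeship/rails-app
    dockerfile_path: Dockerfile
  encrypted_env_file: env.encrypted  # illustrative: committed in encrypted form
```

The idea is that the plain-text file is encrypted with a per-project key before being committed, so secrets never land in the repository unencrypted but are still available to local and hosted builds alike.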
As you’re running in the same environment, you can still be sure that if the process passes, you can trust the deployment in the same way as if it ran on Codeship.
The Future of Codeship
We’re very proud that after months of building the system and running it in production with early customers, we can finally launch our new Docker-based system. With this new system, we’re giving our customers an unmatched level of flexibility for their build environment and workflow setup.
But this is only the first step in bringing better CI and CD to our customers. Now that we can support such a wide range of technologies and workflows, we’re looking into better ways to visualize and support complex team workflows.
Continuous delivery is not just a technology problem — it also needs to address the communication problem that comes with it. Only when the right people are notified at the right time (and the right people can include Product, Marketing, Sales, and Support) will continuous delivery really make all of your company more productive, not just your engineering team.
We’re very excited about what all of this means for the future of Codeship. If you want a deeper dive into the new project configuration, check out the follow-up post that we’ll release in the next few days, and join us for our webinar. Also be sure to take a look at the Codeship Docker documentation.
Published at DZone with permission of Florian Motlik , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.