Packaging application code in containers (for example, Docker containers) has numerous benefits, including predictable, reproducible deploys across machines and environments. This portability ideally eliminates dependency issues, speeding up production release cycles and new machine provisioning. The open source community has enjoyed the benefits that container management tooling brings, and the ecosystem has recently expanded enough for other platforms and frameworks to gain those benefits too. There are plenty of tooling options, including managing containers with the currently popular Docker (or alternatives like rkt), iterating on code locally, then deploying the finished containers to production environments like AWS Elastic Container Service or Google Container Engine.
In the .NET ecosystem, the new .NET Core 1.0 framework fully supports being run and deployed in containers. These containers can be run on POSIX host operating systems like Linux and OS X, and in this post I’ll be showing the setup and workflow for containerizing a .NET Core codebase and running it on the latter. Specifically, we’ll look at the dependencies you’ll need installed locally, then create a new skeleton project with Yeoman, then Dockerize the project and run it up.
As a side note, we’re also getting native Windows Docker containers (as announced this week at https://blog.docker.com/2016/09/dockerforws2016) running on Windows Server 2016. These will be based on a couple of image flavors: windowsservercore, which has a near-complete userland (minus the GUI, so no RDP), and nanoserver, which exposes a very minimal Windows API and is reportedly an order of magnitude smaller than the former.
The above is a bit bleeding edge for now, so we’ll be focusing on the ‘traditional’ POSIX Docker environment in this post. Let’s get to it!
There are a couple of links you’ll want to check out when beginning this process: the official setup guide and the Docker .NET Core image repository. I initially tried the former guide, and while it mostly worked, it required some tweaks to get the container running locally as a detached (daemonized) process. The latter lists the available base images by their tags – the Dockerfile we’ll create later on will produce a custom image containing your app code, based on one of the official base images.
As .NET Core 1.0 was recently released and is still under heavy development, you’ll probably see marked differences between versions. In general, you should be able to update the base image to the latest tag so it matches the precise version of .NET Core referenced in your project.json – so if you’re reading this in the future and the current version is higher than 1.0.1, make sure to use the latest mainline release version.
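For reference, the runtime version your app targets is pinned in project.json under dependencies. A hypothetical fragment for a 1.0.1 app might look like this (the surrounding properties are omitted, and the exact shape may vary between tooling versions):

```json
{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.1"
    }
  }
}
```

Whatever version appears here is the one your Docker base image tag should match.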
First up, let’s install a couple of Yeoman generators so we can make a sample app and Dockerize it, as well as Docker itself so we can make a container out of the resulting project.
Grab these two Yeoman generators to make a new project and turn it into a container (if you already have your own .NET Core project, skip the -aspnet one):
npm install -g yo generator-aspnet generator-docker
If you already have your own application, you can skip this step. Otherwise, use the Yeoman generator above to create a small .NET Core sample app:
yo aspnet
Then pick Web API Application for the simplest example. You can then cd into its directory and run dotnet restore; dotnet run to spin up the application. Browsing to http://localhost:5000/api/values will hit one of the routes and return some data.
Now, let’s turn it into a Docker container.
Adding a Docker Container to Your Project
This is done using this Yeoman generator:
yo docker
Pick .NET Core as the project type, ‘rtm’ as the version, then the defaults for the rest and a name of your choice (preferably something short).
This will result in some bootstrapping scripts and config files, plus two Dockerfiles – one for development and one for production. Each is based on one of three image types, which encapsulate a bare Debian image plus some or all of the .NET Core dependencies, optimized for various scenarios. Here’s the gist of each:
1.0.0-preview2-sdk – contains the full .NET Core SDK and CLI tools. Used for development, debugging and unit testing, and can also be run in build environments. This one’s larger (~500 MB), but Docker’s image layering means it isn’t duplicated across multiple containers (i.e. it’s efficient).
1.0.0-preview2-onbuild – optimized for build environments. This is based on the above ‘sdk’ version; it adds copying of your app code, runs dotnet restore, and has an ENTRYPOINT command to start your web server when the container is run. This one is the easiest to get working, and we’ll be using it in our modified Dockerfile below.
The third variant just contains the OS plus the native .NET Core dependencies, and is thus the smallest (~160 MB). There’s also a ‘-core’ variant which is designed to take the output of dotnet publish (for portable .NET Core applications). Either of these is optimized for use in production only.
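To make the onbuild behavior concrete: an onbuild-style base image carries ONBUILD triggers that fire when you build your own image from it, which is why it’s the easiest to get working. The following is a rough, hypothetical sketch of what such an image adds – check the official microsoft/dotnet Dockerfiles for the exact steps in your tag:

```dockerfile
# Hypothetical sketch of onbuild-style triggers; the real base image
# may differ in details between tags.
WORKDIR /app
ONBUILD COPY . /app           # copy your source in at build time
ONBUILD RUN dotnet restore    # restore NuGet packages during the build
ENTRYPOINT ["dotnet", "run"]  # launch the web server when a container starts
```

Because the triggers run at build time, a Dockerfile based on this image can be very short – the heavy lifting is inherited.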
Configuring the Dockerfile and Running Your App
As of the current .NET Core release (v1.0.1) and the base image version (1.0.0-preview2), we need to modify the standard Dockerfile to get it running seamlessly (in the background as a daemon/service, not in the foreground taking up a terminal). Change the contents of Dockerfile to this:
FROM microsoft/dotnet:1.0.0-preview2-onbuild
WORKDIR /app
COPY bin/Debug/netcoreapp1.0/publish /app
ENV ASPNETCORE_URLS http://*:5000
EXPOSE 5000
ENTRYPOINT /bin/bash -c "dotnet YourAppName.dll"
If the latest version has moved on, update the first line to reference it, and change ‘YourAppName’ on the last line to the name you typed into the Yeoman generator.
Having done that, we can publish our application to create its artifacts:
dotnet publish
Then build our custom Docker image that includes those artifacts:
docker build -t yourappname .
Note that the tag is lowercased here – Docker requires repository names to be all lowercase, so an uppercase name will be rejected.
And finally, run up the container!
docker run -d -p 5000:5000 yourappname
The -d switch runs it in detached mode (in the background), and the -p switch maps the port specified in the Dockerfile to the host machine; it should also match the app’s port. Once again you can browse to http://localhost:5000/api/values and see the output from your running app, this time from inside its container.
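If you script the build, you can derive a valid lowercase tag from the app name automatically rather than lowercasing it by hand (YourAppName below stands in for whatever name you gave the generator):

```shell
# Docker repository names must be all lowercase; derive a safe image tag
# from the (hypothetical) app name before building.
APP_NAME="YourAppName"
IMAGE_TAG=$(echo "$APP_NAME" | tr '[:upper:]' '[:lower:]')
echo "$IMAGE_TAG"   # yourappname
```

The resulting $IMAGE_TAG can then be passed to docker build -t and docker run in place of a hand-typed name.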
Managing Container Lifecycle
Here are a couple of useful commands to deal with containers:
Show created images:
docker images
Show all containers:
docker ps -a
Show running containers:
docker ps
Stop a running container:
docker stop id, where id is a container ID taken from the docker ps output above
That’s it for Now!
Having done that, you now have a container that can be iterated on during development or deployed to a container service for running and auto-scaling, for instance Elastic Container Service (ECS). By packing a set of containers onto shared hosts, this offers more efficient use of resources than dedicating a plain EC2 instance to each app. If you’re creating a service which scales horizontally with load, this process is also somewhat more streamlined and automated than manually creating image templates and Auto Scaling Groups, but that’s a topic for another time.
I hope you found this post informative. Also note that Raygun has a dedicated .NET Core provider in Raygun4NET (available from NuGet) that can capture and track your app-level exceptions automatically, ensuring you never miss bugs. After you’ve added Docker to your project, add Raygun to gain the full power of a next-gen DevOps workflow. Until next time, happy containerizing and error blasting!