There are cases where object storage like S3 is ideal for your data, but using AWS isn't always an option. So is the user out of luck, or are there other services or solutions that could be used? Luckily, yes, there are ways to get S3-like storage without using AWS. In this blog, I'll introduce a service called Wasabi, which offers S3-compatible storage and makes some quite bold claims:
Wasabi is 1/5th the price and 6x faster than Amazon S3
Sounds pretty good to me!
There's also the possibility to run a self-hosted S3-like solution using Minio, which even has some community-developed stacks to install it.
$ kontena stack reg search minio
NAME            VERSION  DESCRIPTION
jakolehm/minio  0.2.0    Distributed Minio (S3 compatible object storage server)
matti/minio     0.1.0    jakolehms minio with health_check, wait_for_port, https force
Of course, the first question is: how do I integrate Wasabi, or any other S3-compatible service, with my containers? The simplest way is to utilize Docker's volume plugin mechanism, so you can mount the data buckets as volumes for your containers. To do this, we'll need to install a volume plugin to manage the integration with Wasabi. For that, I'll use a driver called RexRay, and naturally, the s3fs flavor of it.
Of course, this can also be done directly, without volumes, using some S3-capable library in your app. However, the benefit of using volumes is that you separate the application from the storage: with volumes, the storage is abstracted away. Your app stores its data in some specific directory (in the container), and how that directory is mounted and mapped to external storage is not the app's problem. Volumes also allow you to easily change the storage "backend" based on different requirements.
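As a quick sketch of what that abstraction buys you (the app image and volume names below are hypothetical), the same container can be pointed at completely different storage backends just by swapping the volume it mounts:

```shell
# Local named volume: data stays on the node's disk
docker volume create --driver local app-data

# S3-backed volume via the rexray/s3fs plugin: data lands in a bucket
docker volume create --driver rexray/s3fs app-data-s3

# The app only ever sees /data; the backend is chosen at run time
docker run -d -v app-data:/data my-app      # local storage
docker run -d -v app-data-s3:/data my-app   # Wasabi bucket
```

The application code is identical in both cases; only the volume definition changes.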
Setting Up Wasabi
As with AWS S3, or any other AWS service, it's not really a good idea to use your root account for service integrations. So we need to create an IAM user that we'll use to access Wasabi buckets from our nodes through the RexRay driver.
Once the IAM user is created, grab the keys, as you'll need them in the next steps.
Setting Up RexRay for Docker
I'm going to use the rexray/s3fs driver to integrate my Docker engines with Wasabi. I can use the same driver I'd use for AWS S3, as the Wasabi API is fully compliant with S3.
As we're about to set up the driver using the new Docker plugin mechanism, make sure you're running Docker version 1.13+. If you're running CoreOS, that means you might have to update the OS, as a Docker version supporting plugins ships only from version 1576.4.0 (released on December 6, 2017) onwards.
To install rexray/s3fs for Docker, use the following command:
docker plugin install rexray/s3fs \
  S3FS_ACCESSKEY=YOUR_ACCESS_KEY \
  S3FS_SECRETKEY=YOUR_SECRET_KEY \
  S3FS_ENDPOINT=https://s3.wasabisys.com \
  S3FS_OPTIONS=url=https://s3.wasabisys.com
Naturally, replace the keys with the ones you grabbed from the Wasabi web console when you created the IAM user.
Also, the last option, S3FS_OPTIONS, is really needed to configure rexray/s3fs properly. Without it, rexray/s3fs behaves really oddly.
Now your Docker engine should see all the buckets on Wasabi as usable volumes.
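You can verify this by listing the volumes on a node; each Wasabi bucket should show up as a volume backed by the plugin (the bucket names below are just examples):

```shell
$ docker volume ls
DRIVER               VOLUME NAME
rexray/s3fs:latest   my-first-bucket
rexray/s3fs:latest   my-second-bucket
```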
Starting from the 1.2 release, Kontena supports management of volumes as well. This means that you can now use rexray/s3fs volumes, stored as Wasabi buckets, in your Kontena stacks. Remember that volume scoping affects volume naming and, in this case, bucket creation and data sharing:
- scope: instance; each service instance gets its own bucket to store data
- scope: stack; each stack gets its own bucket, and services within the same stack share the bucket and thus the data
- scope: grid; the whole grid gets a single bucket, so all services using the same volume share the same data
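The scope is chosen when the volume is created, so the three behaviors above map to three otherwise identical commands (the volume name uploads here is just an example):

```shell
kontena volume create --driver rexray/s3fs --scope instance uploads  # bucket per service instance
kontena volume create --driver rexray/s3fs --scope stack uploads     # bucket per stack
kontena volume create --driver rexray/s3fs --scope grid uploads      # single shared bucket
```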
To deploy a service that stores data on Wasabi buckets, I'll use a demo Node.js app that handles file uploads. The uploads are then stored on the volume and, thus, also automatically shared to the Wasabi bucket.
Creation of Volumes
Now that (some of) the nodes are running the RexRay plugin to integrate Docker volumes with the Wasabi service, we can create the volume definition in Kontena:
kontena volume create --driver rexray/s3fs --scope stack uploads
I'm using the scope stack so that all the services in the same stack will mount the same bucket. In this case, there's only one service in the stack, but it also means that each of its instances will mount the same bucket.
Deploy the Services
As always, we'll deploy the services as a Kontena stack. First, make sure you have at least one public-facing load balancer service set up on your platform. If not, I'd highly suggest setting one up using:
kontena stack install kontena/ingress-lb
That'll deploy the Kontena load balancer in daemon mode (one instance per node) on your platform.
To install the Node.js file upload sample, use:
$ kontena stack install jussi/nodejs-file-upload
 > How many upload instances : 3
 > Domain for the service : files.kontena.works
 > Choose a loadbalancer ingress-lb/lb
 [done] Creating stack nodejs-file-upload
 [done] Triggering deployment of stack nodejs-file-upload
 [done] Waiting for deployment to start
 [done] Deploying service upload
I'm using the handy variables to make the stack highly re-usable and not tied to any specific environment.
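For context, Kontena stack variables are declared in the stack YAML and prompted at install time. The following is only a rough, illustrative sketch of what such a stack file might look like (the image name and exact structure here are assumptions, not the actual contents of jussi/nodejs-file-upload):

```yaml
stack: jussi/nodejs-file-upload
version: 0.1.0
variables:
  instances:
    type: integer
    from:
      prompt: How many upload instances
services:
  upload:
    image: example/nodejs-file-upload  # illustrative image name
    instances: ${instances}
    volumes:
      - uploads:/app/uploads
volumes:
  uploads:
    external: true  # refers to the volume created earlier with `kontena volume create`
```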
To test the service, head over with your browser to the domain you gave during the stack installation, given that the DNS points to the load balancer you selected during the installation.
Click the button to select a file to upload.
Once the file has finished uploading, you can check that it actually got stored in the bucket by navigating to the bucket on the Wasabi side.
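If you prefer the command line over the Wasabi web console, any S3-compatible client works too. For example, with the AWS CLI configured with your Wasabi IAM keys and pointed at Wasabi's endpoint:

```shell
aws s3 ls s3://nodejs-file-upload.uploads \
  --endpoint-url https://s3.wasabisys.com
```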
You'll also notice that when Kontena created the volume during deployment, using the RexRay driver, it named the bucket nodejs-file-upload.uploads. This is because we defined the scope as stack, so the same bucket is re-used for all services and instances in the stack. This way, I could install the same stack with a different name, say demo, on the same platform. In that case, it creates a new bucket called demo.uploads so that my stacks don't interfere with each other.