
Running CockroachDB With Docker Compose and Minio - Part 2

In this part of a series of tutorials, we're building a microservice architecture with CockroachDB writing changes in real-time to an S3 bucket in JSON format.

By Artem Ervits · Jan. 01, 22 · Tutorial


CockroachDB, Docker Compose, and Minio

This is my second post on creating a multi-service architecture with docker-compose. We're building a microservice architecture with CockroachDB writing changes in real time to an S3 bucket in JSON format. The S3 bucket is served by a service called Minio, which can act as an S3 appliance on-premises or as a local gateway to your cloud storage.

You can find the first post here.

  • Information on CockroachDB can be found here.
  • Information on Docker Compose can be found here.
  • Information on Minio can be found here.

1. Add the minio service to the docker-compose.yml file

To get started with the Minio container, the easiest approach is to look at their quick-start guide:

docker pull minio/minio
docker run -p 9000:9000 minio/minio server /data


This pulls the latest stable Minio image, maps host port 9000 to the Minio container, and starts the server against a volume called /data.
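
If you want to confirm that the quick-start container is actually up before moving on, MinIO exposes a liveness endpoint you can probe; a minimal check, assuming the port mapping above and that curl is available on your host:

# returns HTTP 200 when the MinIO server is live
curl -I http://127.0.0.1:9000/minio/health/live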

2. Add minio as a service to our existing docker-compose.yml file:
minio:
   image: minio/minio
   environment:
     - MINIO_ACCESS_KEY=miniominio
     - MINIO_SECRET_KEY=miniominio13
     - MINIO_REGION_NAME=us-east-1
   ports:
     - "9000:9000"
   command: server /data
   volumes:
     - ${PWD}/data:/data


One neat trick I just learned is to name your service with container_name; that way, we can reference the containers by name instead of searching docker ps for a container ID. So let's add the property to crdb and minio respectively.

 crdb:
   image: cockroachdb/cockroach:v21.2.3
   container_name: crdb-1
...

 minio:
   image: minio/minio
   container_name: minio


The whole file should look like so:

version: '3.9'

services:

 crdb:
   image: cockroachdb/cockroach:v21.2.3
   container_name: crdb-1
   ports:
     - "26257:26257"
     - "8080:8080"
   command: start-single-node --insecure
   volumes:
     - "${PWD}/cockroach-data/crdb:/cockroach/cockroach-data"

 minio:
   image: minio/minio
   container_name: minio
   environment:
     - MINIO_ACCESS_KEY=miniominio
     - MINIO_SECRET_KEY=miniominio13
     - MINIO_REGION_NAME=us-east-1
   ports:
     - "9000:9000"
   command: server /data
   volumes:
     - ${PWD}/data:/data
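
Before bringing anything up, it doesn't hurt to let Compose validate the file; a quick, optional sanity check:

# prints the resolved configuration, or an error if the YAML is invalid
docker-compose config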


You can now access the containers by name instead of by ID!

Minio:

docker exec -it minio bin/sh


CockroachDB:

docker exec -it crdb-1 bash


Invoking a shell varies based on the base image a container uses, hence the difference between bin/sh and bash respectively.

I appended -1 to the CockroachDB container name because CockroachDB is built to be a multi-node database. It is assumed that there will be multiple containers running CockroachDB, and each can follow its own naming convention. Running a single node of CockroachDB is an antipattern and should only be done with caution, for example in local development.
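
For reference only, here is a rough sketch of what the crdb section could look like as a three-node cluster joined with --join; it is not used in the rest of this tutorial, and the extra service names and volume paths are illustrative assumptions:

 crdb-1:
   image: cockroachdb/cockroach:v21.2.3
   container_name: crdb-1
   ports:
     - "26257:26257"
     - "8080:8080"
   # nodes discover each other via --join instead of start-single-node
   command: start --join=crdb-1,crdb-2,crdb-3 --insecure
   volumes:
     - "${PWD}/cockroach-data/crdb-1:/cockroach/cockroach-data"

 crdb-2:
   image: cockroachdb/cockroach:v21.2.3
   container_name: crdb-2
   command: start --join=crdb-1,crdb-2,crdb-3 --insecure
   volumes:
     - "${PWD}/cockroach-data/crdb-2:/cockroach/cockroach-data"

 crdb-3:
   image: cockroachdb/cockroach:v21.2.3
   container_name: crdb-3
   command: start --join=crdb-1,crdb-2,crdb-3 --insecure
   volumes:
     - "${PWD}/cockroach-data/crdb-3:/cockroach/cockroach-data"

A cluster started this way also needs a one-time docker exec -it crdb-1 ./cockroach init --insecure before it accepts connections. For the rest of this post, we stick with the single node defined above.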

3. Start docker-compose with:
docker-compose up -d
Creating network "crdb-compose_default" with the default driver
Pulling minio (minio/minio:)...
latest: Pulling from minio/minio
e7c96db7181b: Pull complete
c5a27a4b3b58: Pull complete
fe4a797b2726: Pull complete
Digest: sha256:60211bbb12326e52f7a20e91ca7b41145f9269603de0a347fcee8f0817caf39e
Status: Downloaded newer image for minio/minio:latest
Creating minio  ... done
Creating crdb-1 ... done
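
With both containers up, the names we chose show up directly in docker ps; for example, using docker's built-in formatting to list just names and status:

# lists running containers by the names set via container_name
docker ps --format 'table {{.Names}}\t{{.Status}}'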


We covered how to view the logs and run in the background in the previous post. This time, because we have two services, we can reference each one to zero in on its logs specifically. Naming the containers as we did earlier comes in handy, doesn't it?

docker-compose logs minio
Attaching to minio
minio    | Attempting encryption of all config, IAM users and policies on MinIO backend
minio    | Endpoint:  http://192.168.128.3:9000  http://127.0.0.1:9000
minio    |
minio    | Browser Access:
minio    |    http://192.168.128.3:9000  http://127.0.0.1:9000
minio    |
minio    | Object API (Amazon S3 compatible):
minio    |    Go:         https://docs.min.io/docs/golang-client-quickstart-guide
minio    |    Java:       https://docs.min.io/docs/java-client-quickstart-guide
minio    |    Python:     https://docs.min.io/docs/python-client-quickstart-guide
minio    |    JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
minio    |    .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide


We can now talk about the specifics of this deployment. You're already familiar with the general structure from the previous post, so let's cover the new additions. Minio can act as a local S3 appliance, and I'm using the required properties to override the defaults with MINIO_ACCESS_KEY=miniominio and MINIO_SECRET_KEY=miniominio13, while optionally specifying a region with MINIO_REGION_NAME=us-east-1.

I will need the latter in the next tutorial. Feel free to read the Minio docs for additional properties. The goal of my tutorial is to set up a local S3 where I'm going to sync CDC (short for Change Data Capture) from CockroachDB in a future article. Today, we are only focusing on the Minio setup. Because the Minio container allows us to override these properties with environment variables, we populate the compose file with the following:

environment:
     - MINIO_ACCESS_KEY=miniominio
     - MINIO_SECRET_KEY=miniominio13
     - MINIO_REGION_NAME=us-east-1


We are already familiar with the volume and ports directives. The command to start the Minio service is server /data, and /data is mapped to a local directory within the root of our project. Finally, let's talk about the Minio service itself. From the log output, you can see that the web UI is reachable in a browser at http://192.168.128.3:9000 or http://127.0.0.1:9000. Feel free to open your favorite browser, navigate to http://127.0.0.1:9000, and browse to your heart's content. You can create buckets and upload and download files to and from a bucket as you would with a typical object store. The root of your Minio volume is /data.
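
If you prefer the command line over the web UI, the same bucket operations can be scripted with MinIO's mc client; a minimal sketch, assuming mc is installed on your host and reusing the credentials from the compose file (the alias, bucket, and file names are just examples):

# point an alias at the local Minio endpoint
mc alias set local http://127.0.0.1:9000 miniominio miniominio13

# create a bucket, upload a file, and list the bucket contents
mc mb local/cdc-demo
mc cp ./example.json local/cdc-demo/
mc ls local/cdc-demo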

The goal of this tutorial was to set up a local S3 target for syncing CDC (Change Data Capture) from CockroachDB. Now that we have a foundation for a database writing to an S3 bucket, we're ready to put the architecture to good use in the next article.

Hope you enjoyed this post and Happy Holidays!


Published at DZone with permission of Artem Ervits. See the original article here.

