Raspberry Pi Cluster Emulation With Docker Compose

This guide discusses everything needed to build a simple, scalable, and fully binary compatible Raspberry Pi cluster using QEMU, Docker, Docker Compose, and Ansible.

By Sudip Sengupta · Aug. 26, 2020 · Tutorial


Introduction

The Raspberry Pi is no longer just a low-cost platform for students to learn computing; it's now a legitimate research and development platform that's being used for IoT, networking, distributed systems, and software development. It's even being used administratively in production environments.

Not long after the first Raspberry Pi was released in 2012, several groups set out to build them into low-cost clusters, often for research and testing purposes. Interns at DataStax built a multi-datacenter, 32-node Cassandra fault-tolerance demo, complete with a big red button to simulate the failure of an entire datacenter. David Guill built a 40-node Raspberry Pi cluster that was intended to be part of his MSCE thesis. Balena built "The Beast," a 120-node Raspberry Pi cluster, for scaled testing of their online platform. And on the extreme end of the spectrum, Oracle built a 1,060-node Raspberry Pi cluster, which they introduced at Oracle OpenWorld 2019.

Innovation with the Raspberry Pi continues as the boards are turned into everything from Wi-Fi extenders and security cameras to even bigger clusters. While the main value of these clusters is inherent in their size and low cost, their popularity makes them an increasingly common development platform. Since the Raspberry Pi uses an ARM processor, this can make development problematic for those of us who work exclusively in the cloud. While commercial solutions exist, we will be building our own emulated cluster using a fully open source stack hosted on Google Compute Engine.

Use Cases

Other than learning from the experience, Dockerizing an emulated Raspberry Pi enables us to do three things. One, it turns what would otherwise be a hardware-only device into software that nobody has to remember to carry around (I'm always losing the peripheral cables). Two, it enables Docker to do for the Pi what Docker does best for everything else: it makes software portable, easy to manage, and easy to replicate. And three, it takes up no physical space. If we can build one Raspberry Pi with Docker, we can build many. If we can build many, we can network them all together. While we may encounter some limitations, this build will emulate a cluster of Raspberry Pi 1s that's logically equivalent to a simple, multi-node physical cluster.

Emulated Hardware Architecture

While technically not identical, the emulation software we will be using, QEMU, provides an ARM-Versatile architecture that's roughly compatible with what is found on a Raspberry Pi 1. Some modifications to the kernel are necessary in order for it to work properly with Raspbian, but for our purposes, it's one of the more stable open source solutions available.

pi@raspberrypi:~$ cat /proc/cpuinfo
processor       : 0
model name      : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS        : 577.53
Features        : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xb76
CPU revision    : 7

Hardware        : ARM-Versatile (Device Tree Support)
Revision        : 0000
Serial          : 0000000000000000
Compared to a physical Raspberry Pi 1, the two are nearly identical:

pi@raspberrypi:~ $ cat /proc/cpuinfo
processor       : 0
model name      : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS        : 697.95
Features        : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xb76
CPU revision    : 7

Hardware        : BCM2835
Revision        : 000d
Serial          : 000000003d9a54c5


Background

What Is QEMU?

QEMU is a processor emulator. It supports a number of different processors, but the only thing we're interested in is something that can run Raspberry Pi images natively without a lot of difficulty. In this case, we're going to be using QEMU 4.2.0, which supports an ARM11 instruction set that's compatible with the Broadcom BCM2835 (ARM1176JZF-S) chip found on the Raspberry Pi 1 and Zero. We will use the ARM1176 support in QEMU, which will allow us to more or less emulate a Raspberry Pi 1. I say more or less because we will still need to use a customized Raspbian kernel in order to boot on the emulated hardware. QEMU support for the Pi is still in development, so our approach to getting it to work here is just a clever hack that will by no means be optimal or efficient in terms of CPU utilization.

QEMU Features

QEMU supports many of the same features found in Docker; however, it can perform full software emulation without a host kernel driver. This means that it can run inside Docker, or any other virtual machine, without host virtualization support. The QEMU feature list is extensive, and the learning curve is steep. However, the primary feature we will focus on for this build is host port forwarding, so that data can be passed to the host.

Dockerized QEMU

One of Docker's strengths is that it doesn't perform full-fledged virtualization but instead relies on the architecture of the host system. Since our host system will be running an Intel processor, we can't expect Docker to handle ARM operations on its own. So, we will be placing QEMU inside a Docker container. Since Docker is designed to run software at near-native performance, the operational efficiency challenge will be with QEMU itself. QEMU, on the other hand, emulates a machine's architecture entirely in software. The advantage is that it can run inside any virtualized system or container, independent of that system's architecture. If patient, we could even run a Dockerized Raspberry Pi container inside another Dockerized Raspberry Pi container. The drawback is that QEMU performs poorly compared to other types of virtualization. But we can get the best of both worlds by leveraging QEMU's ARM emulation while depending on Docker for everything else.

Raspbian

Based on Debian, Raspbian is a popular and well-supported operating system for the Raspberry Pi, and is one of the most frequently recommended for the platform. The community is very active and well managed.

Physical Raspberry Pi Speed Comparison

The following tests are intended as a baseline for comparing our virtualized systems. Since we will be emulating a single core, these tests are single-core, single-thread only, regardless of how many physical cores the board actually has.
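The per-board invocations aren't reproduced here, but the figures below are consistent with the single-threaded sysbench CPU test used later in this guide, presumably run on each Pi as something like:

# Single-thread CPU benchmark (older sysbench 0.4.x syntax, matching the output format below).
sysbench --test=cpu --cpu-max-prime=9999 run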

Raspberry Pi 1 2011,12

Test execution summary:
    total time:                          330.5514s
    total number of events:              10000
    total time taken by event execution: 330.5002
    per-request statistics:
         min:                                 32.92ms
         avg:                                 33.05ms
         max:                                 40.94ms
         approx.  95 percentile:              33.24ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   330.5002/0.00


Raspberry Pi 1 A+ V1.1 2014

Test execution summary:
    total time:                          328.7505s
    total number of events:              10000
    total time taken by event execution: 328.6931
    per-request statistics:
         min:                                 32.71ms
         avg:                                 32.87ms
         max:                                 78.93ms
         approx.  95 percentile:              33.03ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   328.6931/0.00


Raspberry Pi Zero W v1.1 2017

Test execution summary:
    total time:                          228.2025s
    total number of events:              10000
    total time taken by event execution: 228.1688
    per-request statistics:
         min:                                 22.76ms
         avg:                                 22.82ms
         max:                                 35.29ms
         approx.  95 percentile:              22.94ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   228.1688/0.00


Raspberry Pi 2 Model B v1.1 2014

Test execution summary:
    total time:                          224.9052s
    total number of events:              10000
    total time taken by event execution: 224.8738
    per-request statistics:
         min:                                 22.20ms
         avg:                                 22.49ms
         max:                                 32.85ms
         approx.  95 percentile:              22.81ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   224.8738/0.00


Raspberry Pi 3 Model B v1.2 2015

Test execution summary:
    total time:                          139.6140s
    total number of events:              10000
    total time taken by event execution: 139.6087
    per-request statistics:
         min:                                 13.94ms
         avg:                                 13.96ms
         max:                                 34.06ms
         approx.  95 percentile:              13.96ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   139.6087/0.00


Raspberry Pi 4 B 2018

Test execution summary:
    total time:                          92.6405s
    total number of events:              10000
    total time taken by event execution: 92.6338
    per-request statistics:
         min:                                  9.22ms
         avg:                                  9.26ms
         max:                                 23.50ms
         approx.  95 percentile:               9.27ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   92.6338/0.00


Project Requirements

Single Host Specifications

Historically, QEMU has been single-threaded, emulating all cores of a system's architecture on a single CPU. While that's no longer the case, we are still going to be emulating a single-core Raspberry Pi. We will do some benchmarks later to compare how different CPU limits on each node impact performance. For now, we will use one CPU per single-core node. Since QEMU has the potential to use a lot of CPU resources due to its inherent inefficiency, our initial three-node cluster will start with a baseline of at least one CPU per node, leaving one CPU dedicated to the host to avoid performance problems. The VM specs selected for this task are as follows.

Cloud Provider: Google Cloud Platform
Instance Type: n1-standard-4
CPUs: 4
Memory: 15GB
Disk: 100GB
Operating System: Ubuntu 18.04 LTS


Docker

On the host, we're using the default version of Docker available from the default apt repository for Ubuntu 18.04 LTS.

# docker -v 
Docker version 18.09.7, build 2d0083d

Docker Compose

# docker-compose -v 
docker-compose version 1.25.0, build 0a186604
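If you are starting from a fresh VM, a minimal install sketch follows. The docker.io package is what ships in Ubuntu 18.04's default repository; the Compose 1.25.0 build shown above most likely came from pip or a direct binary download rather than apt, so adjust to however you prefer to install it:

# Docker from the default Ubuntu 18.04 repository.
sudo apt-get update
sudo apt-get install -y docker.io

# One way to get a Compose 1.25.x build (a binary download from the project's
# GitHub releases page works just as well).
sudo apt-get install -y python3-pip
sudo pip3 install docker-compose==1.25.0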

Docker Hub Ubuntu Image

18.04, bionic-20200112, bionic, latest

QEMU

Installed inside the Docker container, we will be using the following version of QEMU for ARM:

# qemu-system-arm --version 
QEMU emulator version 4.2.0 
Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

QEMU Customized Kernel for Raspbian

We will use Dhruv Vyas's compiled kernel for Raspbian, which has been modified to work with QEMU and is loaded by QEMU inside Docker.

Raspbian Lite Image

We will also use an unmodified version of Raspbian Lite from 9/30/2019, booted by QEMU.

Expect (Tcl/Tk)

Installed in the Docker container is the following version of Expect:

# expect -v
expect version 5.45.4

ssh/sshd

sshd will need to be enabled on each Raspbian node, and ssh should be enabled on the host.
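The Expect script later in this guide handles this automatically; done by hand from a shell inside Raspbian, it amounts to the same two commands the script types:

# Run inside the Raspbian guest to enable and start sshd.
sudo systemctl enable ssh
sudo systemctl start ssh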

Ansible

The following version of Ansible is also being used, along with its other dependencies:

# ansible --version
ansible 2.5.1
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Nov  7 2019, 10:07:09) [GCC 7.4.0]


Building the Docker Images

QEMU Build Container

We will compile QEMU 4.2.0 from source. It will need all the supporting build tools, so to keep our app container as small as possible, we will create a separate build container for the QEMU build using a minimal Ubuntu 18.04 image from Docker Hub.

QEMU App Container

Once QEMU is compiled from source, we will transfer the binary to the app container. Also from Docker Hub, we will use the same minimal Ubuntu 18.04 image to host the QEMU binary.

Docker Configuration

The Dockerfile

We will be using the following Dockerfile, which may also be found, in updated form, in this guide's accompanying pidoc repository on GitHub. Each code snippet below makes up a segment of the Dockerfile. Thanks go to Luke Childs for his work on dockerpi.

Build stage for qemu-system-arm:

FROM ubuntu AS qemu-system-arm-builder
ARG QEMU_VERSION=4.2.0
ENV QEMU_TARBALL="qemu-${QEMU_VERSION}.tar.xz"
WORKDIR /qemu

RUN apt-get update && \
    apt-get -y install \
                       wget \
                       gpg \
                       pkg-config \
                       python \
                       build-essential \
                       libglib2.0-dev \
                       libpixman-1-dev \
                       libfdt-dev \
                       zlib1g-dev \
                       flex \
                       bison



Download source.

RUN wget "https://download.qemu.org/${QEMU_TARBALL}"

RUN # Verify signatures...
RUN wget "https://download.qemu.org/${QEMU_TARBALL}.sig"
RUN gpg --keyserver keyserver.ubuntu.com --recv-keys CEACC9E15534EBABB82D3FA03353C9CEF108B584
RUN gpg --verify "${QEMU_TARBALL}.sig" "${QEMU_TARBALL}"



Extract tarball.

RUN tar xvf "${QEMU_TARBALL}"

Build source.

RUN "qemu-${QEMU_VERSION}/configure" --static --target-list=arm-softmmu 
RUN make -j$(nproc) 
RUN strip "arm-softmmu/qemu-system-arm"

Build the intermediary pidoc VM app image.

FROM ubuntu as pidoc-vm
ARG RPI_KERNEL_URL="https://github.com/dhruvvyas90/qemu-rpi-kernel/archive/afe411f2c9b04730bcc6b2168cdc9adca224227c.zip"
ARG RPI_KERNEL_CHECKSUM="295a22f1cd49ab51b9e7192103ee7c917624b063cc5ca2e11434164638aad5f4"

Transfer binary from build container to app container.

COPY --from=qemu-system-arm-builder /qemu/arm-softmmu/qemu-system-arm /usr/local/bin/qemu-system-arm

Download modified kernel and install.

ADD $RPI_KERNEL_URL /tmp/qemu-rpi-kernel.zip

RUN apt-get update && \
    apt-get -y install \
                        unzip \
                        expect

RUN cd /tmp && \
    echo "$RPI_KERNEL_CHECKSUM  qemu-rpi-kernel.zip" | sha256sum -c && \
    unzip qemu-rpi-kernel.zip && \
    mkdir -p /root/qemu-rpi-kernel && \
    cp qemu-rpi-kernel-*/kernel-qemu-4.19.50-buster /root/qemu-rpi-kernel/ && \
    cp qemu-rpi-kernel-*/versatile-pb.dtb /root/qemu-rpi-kernel/ && \
    rm -rf /tmp/*

VOLUME /sdcard



Then we copy the entry point script from the host's main directory.

ADD ./entrypoint.sh /entrypoint.sh 
ENTRYPOINT ["./entrypoint.sh"]

Build the final app pidoc image with the Raspbian Lite filesystem loaded.

FROM pidoc-vm as pidoc
ARG FILESYSTEM_IMAGE_URL="http://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2019-09-30/2019-09-26-raspbian-buster-lite.zip"
ARG FILESYSTEM_IMAGE_CHECKSUM="a50237c2f718bd8d806b96df5b9d2174ce8b789eda1f03434ed2213bbca6c6ff"

ADD $FILESYSTEM_IMAGE_URL /filesystem.zip
ADD pi_ssh_enable.exp /pi_ssh_enable.exp

RUN echo "$FILESYSTEM_IMAGE_CHECKSUM  /filesystem.zip" | sha256sum -c



The entrypoint.sh File

First, the script checks whether the filesystem image has already been extracted; if not, it extracts it from the bundled zip archive.

#!/bin/sh

raspi_fs_init() {
  image_path="/sdcard/filesystem.img"
  zip_path="/filesystem.zip"
  if [ ! -e $image_path ]; then
    echo "No filesystem detected at ${image_path}!"
    if [ -e $zip_path ]; then
      echo "Extracting fresh filesystem..."
      unzip $zip_path
      mv *.img $image_path
      rm $zip_path
    else
      exit 1
    fi
  fi
}



The script then checks for an empty raspi-init file, which serves as a marker to determine if Expect has been launched previously to enable ssh on Raspbian.

if [ ! -e /raspi-init ]; then
  touch /raspi-init
  raspi_fs_init
  echo "Initiating Expect..."
  /usr/bin/expect /pi_ssh_enable.exp `hostname -I`
  echo "Expect Ended..."



If ssh has already been enabled on a previous boot, then we only need to launch QEMU, without Expect. Note that we are forwarding port 22 on Raspbian to port 2222 inside the Docker container.

else
  /usr/local/bin/qemu-system-arm \
        --machine versatilepb \
        --cpu arm1176 \
        --m 256M \
        --hda /sdcard/filesystem.img \
        --net nic \
        --net user,hostfwd=tcp:`hostname -I`:2222-:22 \
        --dtb /root/qemu-rpi-kernel/versatile-pb.dtb \
        --kernel /root/qemu-rpi-kernel/kernel-qemu-4.19.50-buster \
        --append "root=/dev/sda2 panic=1" \
        --no-reboot \
        --display none \
        --serial mon:stdio
fi



Enable SSHD on Raspbian (Expect Tcl/Tk Method)

QEMU doesn't have a straightforward method for running configuration scripts on boot. And because Raspbian doesn't come with SSH enabled by default, we will have to turn it on ourselves. Our options are to do it manually or to use some sort of scripting tool that can interact with stdio. Another option is to customize the Raspbian image before installation; this would have to be done on the host, however, as Docker restricts the mounting of new filesystems. In any case, to keep this build portable and host-independent, the most straightforward approach for our purposes is to use an Expect script and copy it into our Docker image at build time.
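For reference, the image-customization alternative mentioned above would be done on the host before building, by mounting the Raspbian image's boot partition and dropping an empty file named ssh onto it; a rough sketch, assuming the boot partition starts at sector 8192 (verify the offset with fdisk -l against the image you actually downloaded):

# On the host, not inside Docker: an empty "ssh" file on the boot partition
# tells Raspbian to enable sshd on first boot.
sudo mkdir -p /mnt/raspbian-boot
sudo mount -o loop,offset=$((8192 * 512)) 2019-09-26-raspbian-buster-lite.img /mnt/raspbian-boot
sudo touch /mnt/raspbian-boot/ssh
sudo umount /mnt/raspbian-boot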

The pi_ssh_enable.exp File

Since an unmodified Raspbian image has no accessible ports by default, we will use Expect to interface with stdio in QEMU, log in with a default username and password, and enable the sshd listener.

#!/usr/bin/expect -f
set ipaddr [lindex $argv 0]
set timeout -1
spawn /usr/local/bin/qemu-system-arm \
  --machine versatilepb \
  --cpu arm1176 \
  --m 256M \
  --hda /sdcard/filesystem.img \
  --net nic \
  --net user,hostfwd=tcp:$ipaddr:2222-:22 \
  --dtb /root/qemu-rpi-kernel/versatile-pb.dtb \
  --kernel /root/qemu-rpi-kernel/kernel-qemu-4.19.50-buster \
  --append "root=/dev/sda2 panic=1" \
  --no-reboot \
  --display none \
  --serial mon:stdio
expect "raspberrypi login:"
send -- "pi\r"
expect "Password:"
send -- "raspberry\r"
expect "pi@raspberrypi:"
send -- "sudo systemctl enable ssh\r"
expect "pi@raspberrypi:"
send -- "sudo systemctl start ssh\r"
expect "pi@raspberrypi:"
expect eof



Build Image

In the folder with the Dockerfile, we will build our two images. The first is the build image, which includes all the dependencies for compiling QEMU, and the other is the app image for running QEMU.

docker build -t pidoc .
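A single build walks through both stages of the multi-stage Dockerfile. If you want to iterate on the QEMU compilation alone, the build stage can also be targeted directly, using the stage name from the Dockerfile above:

# Optional: build only the QEMU compilation stage, for example while debugging the build.
docker build --target qemu-system-arm-builder -t pidoc-builder .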

Network Forwarding and Troubleshooting

Once the build is complete, bring it up detached and follow the logs.

docker run -itd --name testnode pidoc 
docker logs testnode -f

Raspbian will be extracted automatically, and QEMU should begin booting from the image.

Once Raspbian is fully booted, Expect should automatically enable sshd. Log into the Docker container and test that SSH is reachable from inside the container on port 2222.

# docker exec -it testnode bash
root@d4abc2f655e6:/# hostname -I
172.17.0.3
root@d4abc2f655e6:/# cat < /dev/tcp/172.17.0.3/2222
SSH-2.0-OpenSSH_7.9p1 Raspbian-10



Exit the shell, then kill and remove the container.

root@d4abc2f655e6:/# exit
exit
# docker kill testnode
testnode
# docker container rm testnode
testnode



Testing the Docker Container

Start/Test Container

We will need to start the container for testing. This is primarily to gain some intuition about the performance of QEMU so that we can make better design decisions regarding our cluster. The system should come up clean, with perhaps a few benign warnings related to differences between the somewhat more generalized emulated hardware and the expected physical Raspberry Pi hardware. I found it necessary to make sure port forwarding was working properly between QEMU and the Docker image so that I could further verify that port forwarding between the Docker image and host was working properly. Our first goal is to double-forward SSH so that QEMU is accessible directly from the host.

docker run -itd -p 127.0.0.1:2222:2222 --name testnode pidoc 
docker logs testnode -f

Once the system again comes online, test for sshd on port 2222 of the host by using ssh to log into Raspbian:

# ssh pi@localhost -p 2222
The authenticity of host '[localhost]:2222 ([127.0.0.1]:2222)' can't be established.
ECDSA key fingerprint is SHA256:N0oRF23lpDOFjlgYAbml+4v2xnYdyrTmBgaNUjpxnFM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:2222' (ECDSA) to the list of known hosts.
pi@localhost's password:
Linux raspberrypi 4.19.50+ #1 Tue Nov 26 01:49:16 CET 2019 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Jan 21 12:24:59 2020

SSH is enabled and the default password for the 'pi' user has not been changed.
This is a security risk - please login as the 'pi' user and type 'passwd' to set a new password.

pi@raspberrypi:~ $



Testing Fractional CPU Utilization

To run this cluster, we're using a GCP n1-standard-4 instance (4 vCPUs, 15GB RAM) running Ubuntu 18.04 LTS. But we now notice how inefficient QEMU is once Raspbian begins doing anything. Multiple Raspberry Pi instances might stack fine if they're idle, but if we want to keep the system viable, we will need to restrict CPU utilization on each instance, or else the system could be rendered unusable once more than a few nodes are put under load. Fortunately, Docker can handle this for us. We have 15GB of RAM on this instance, so let's see what happens if we are slightly more ambitious and squeeze six Raspberry Pi containers onto our VM, leaving a whole core for the host to manage other tasks without much risk of a failure. We can scale this further later with Docker Compose.

We will run two test containers at 50% and 100% for benchmark testing.

docker run -itd --cpus="0.50" -p 127.0.0.1:2250:2222 --name pidoc_50_test pidoc
docker run -itd --cpus="1.00" -p 127.0.0.1:2200:2222 --name pidoc_00_test pidoc

At this point, we technically already have a cluster. We just don't have a way to manage the nodes except by hand.

Performance

While a full-core allocation performs at near physical Raspberry Pi speeds, an instance limited to 50% of a core runs at roughly half that speed. This might be manageable under certain circumstances, but it's not ideal. The overall efficiency of the cluster may still increase, depending on the task at hand. For now, we will continue with our original full-core allocation of three nodes, and later test with six nodes.
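To see how the CPU caps behave while the benchmarks run, docker stats gives a live view of each container's CPU share; the container names are the ones created above:

# Live CPU and memory usage for the two throttled test nodes.
docker stats pidoc_50_test pidoc_00_test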

Single Thread Benchmarks

Testing can be done by using the following simple benchmark tests.

CPU Prime Test

sysbench --test=cpu --cpu-max-prime=9999 run

CPU Integer Test

time $(i=0; while ((i<9999999)); do ((i++)); done)

HDD Read Test

dd bs=16K count=102400 iflag=direct if=test_data of=/dev/null

HDD Write Test

dd bs=16k count=102400 oflag=direct if=/dev/zero of=test_data

Results (Single Thread)

For this guide, we will focus only on the CPU prime test using sysbench.
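A convenient way to run the same test against the emulated nodes is over their forwarded ssh ports, assuming sysbench has been installed inside each Raspbian guest; the ports are the ones published for the two test containers above:

# CPU prime test on the 100% and 50% nodes via their forwarded ssh ports.
ssh -p 2200 pi@localhost 'sysbench --test=cpu --cpu-max-prime=9999 run'
ssh -p 2250 pi@localhost 'sysbench --test=cpu --cpu-max-prime=9999 run'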

Host

General statistics:
    total time:                          10.0009s
    total number of events:              9417

Latency (ms):
         min:                                  1.04
         avg:                                  1.06
         max:                                  1.63
         95th percentile:                      1.10
         sum:                               9992.36

Threads fairness:
    events (avg/stddev):           9417.0000/0.00
    execution time (avg/stddev):   9.9924/0.00


Virtual Raspberry Pi - Limit: 100%

Test execution summary:
    total time:                          397.8781s
    total number of events:              10000
    total time taken by event execution: 397.4056
    per-request statistics:
         min:                                 38.61ms
         avg:                                 39.74ms
         max:                                 57.15ms
         approx.  95 percentile:              40.92ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   397.4056/0.00


Virtual Raspberry Pi - Limit: 50%

Test execution summary:
    total time:                          823.8272s
    total number of events:              10000
    total time taken by event execution: 822.9329
    per-request statistics:
         min:                                 38.68ms
         avg:                                 82.29ms
         max:                                184.02ms
         approx.  95 percentile:              94.65ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   822.9329/0.00


Compose the Cluster

Create docker-compose.yml File

We will use Docker Compose for cluster creation. Initially, we will keep this at three nodes to keep it easy to manage. Once we have a proof of concept cluster, we can then scale it out. The most straightforward way to handle this is to map separate ports to localhost for each container. We can specify a range of ports to be used in the docker-compose.yml file, as noted below.

version: '3'

services:
  node:
    image: pidoc
    ports:
      - "2201-2203:2222"

Bring Up Cluster

To bring up three nodes with docker-compose, use the --scale option.

docker-compose up --scale node=3
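Once the three containers are up, it's worth confirming the port mappings and that a node answers on its forwarded port; for example, using the 2201-2203 range from the compose file:

# List the scaled containers and their published ports.
docker-compose ps

# Spot-check one node over its forwarded ssh port (default Raspbian credentials).
ssh -p 2201 pi@localhost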

Ansible Configuration

Now that we have all the infrastructure in place for a cluster, we need to manage it. We could use docker attach to reach the QEMU monitor, but ssh is much more robust. Since we are using ssh, we can use Ansible. A few basic operations are provided here: update, upgrade, reboot, and shutdown. These can be expanded as needed to develop a more robust system.

hosts File

Please take note of the ports we specified in the docker-compose.yml file earlier, and edit your hosts inventory accordingly.

[all:vars]
ansible_user=pi
ansible_ssh_pass=raspberry
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'

[pidoc-cluster]
node_1.localhost:2201
node_2.localhost:2202
node_3.localhost:2203



For a more comprehensive walkthrough of Ansible, please read How to Install and Configure Ansible on Ubuntu.
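Before running any playbooks, a quick connectivity check against the inventory above confirms that Ansible can reach every emulated node over its forwarded port:

# Ping all nodes in the pidoc-cluster group defined in the hosts file.
# Note: password-based SSH in the inventory requires sshpass on the control host.
ansible pidoc-cluster -i hosts -m ping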

update.yml File

---
- name: Apt update Pi...
  hosts: pidoc-cluster
  tasks:
    - name: Update apt cache...
      become: yes
      apt:
        update_cache: yes



Usage: ansible-playbook playbooks/update.yml -i hosts

upgrade.yml File

---
- name: Upgrade Pi...
  hosts: pidoc-cluster
  gather_facts: no
  tasks:
    - name: Update and upgrade apt packages...
      become: true
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400



Usage: ansible-playbook playbooks/upgrade.yml -i hosts

reboot.yml File

---
- name: Reboot Pi...
  hosts: pidoc-cluster
  gather_facts: no
  tasks:
    - name: Reboot Pi...
      shell: shutdown -r now
      async: 0
      poll: 0
      ignore_errors: true
      become: true

    - name: Wait for reboot...
      local_action: wait_for host={{ ansible_host }} state=started delay=10
      become: false



Usage: ansible-playbook playbooks/reboot.yml -i hosts

shutdown.yml File

---
- name: Shutdown Pi...
  hosts: pidoc-cluster
  gather_facts: no
  tasks:
    - name: 'Shutdown Pi'
      shell: shutdown -h now
      async: 0
      poll: 0
      ignore_errors: true
      become: true

    - name: "Wait for shutdown..."
      local_action: wait_for host={{ ansible_host }} state=stopped
      become: false



Usage: ansible-playbook playbooks/shutdown.yml -i hosts

Scaling Up

Docker Compose makes scaling Raspberry Pi containers on the same host nearly trivial. Because we use Ansible for cluster management, it also becomes easy to scale horizontally to other hosts by changing the port binding from localhost to a routable IP address. Here is our example with six nodes instead of three.

docker-compose.yml File

version: '3'

services:
  node:
    image: pidoc
    ports:
      - "2201-2212:2222"
    deploy:
      resources:
        limits:
          cpus: "0.5"



Bring Up Cluster

We should stop the containers from our previous cluster and prune all volumes before scaling up the revised cluster (see the teardown sketch after the scale command below). To bring up all six nodes with docker-compose, use the --scale option again.

docker-compose up --scale node=6
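As mentioned above, the previous cluster should be stopped and its volumes pruned before scaling up; a sketch using standard Compose and Docker commands (the anonymous volumes come from the VOLUME /sdcard declaration in the Dockerfile):

# Stop and remove the old cluster's containers along with their anonymous
# volumes, then prune any dangling volumes that remain.
docker-compose down -v
docker volume prune -f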

Future Work

Raspberry Pi emulation is still under development for QEMU. While the configuration for this project is relatively stable, there's a lot of room for improvement. Attempting a migration to Raspberry Pi 3 emulation would be an ambitious next step. Docker Compose, though designed for single-host builds, is already easy enough to replicate to other hosts manually or through Ansible. But it could just as easily be scaled out with Swarm or k8s, enabling us to build an emulated Raspberry Pi cluster of any size. Additionally, with one or more port redirects, other systems of control can be put into place, including various node endpoints, depending on purpose and application.


Published at DZone with permission of Sudip Sengupta. See the original article here.
