Automating Workflows with GitHub Action in Nebula Graph

An in-depth tutorial on how to automate workflows with GitHub Action in Nebula Graph, an open-source graph database.

Nebula Graph is an open-source graph database. Its automated testing was initially implemented with Jenkins running on Azure cloud servers, triggered through GitHub webhooks. When a pull request is opened, adding a ready-for-testing label and commenting “Jenkins go” automatically triggers the corresponding unit-testing process:

unit testing passed

However, this solution is not cost-effective: the Azure cloud servers are rented, and compiling Nebula Graph requires high-performance machines. For months, the team had been looking for an alternative to the Azure cloud servers. The new solution must support multi-environment tests. After some research, the development team found the following candidates:

  1. TravisCI

  2. CircleCI

  3. Azure Pipeline

  4. Jenkins on k8s (Self-hosted)

All of these products are user-friendly, but they impose some restrictions on open-source projects.

Given its previous experience with GitLab CI, the team realized the first choice should be a product deeply integrated with GitHub, one that could share GitHub's entire open-source ecosystem and natively call its APIs. Coincidentally, GitHub Action 2.0 was released in 2019, so the Nebula Graph team set out to explore it.

For the team, GitHub Action is useful in the following ways:

  1. It is free. For open-source projects, the full feature set is available, and high-performance machines are offered at no cost.

  2. A powerful open-source ecosystem. All open-source actions can be directly used during the entire continuous integration (CI) process. It also supports custom actions. GitHub Action supports customizations in Docker, which means you can create a custom action with just bash commands.

  3. It supports multiple systems. There is one-click deployment on Windows, macOS, and Linux, which makes cross-platform operations easier.

  4. Interaction with the GitHub API. You can directly access the GitHub API V3 with GITHUB_TOKEN so that you can upload files and check PR status with the curl command.

  5. GitHub-hosted runners. Simply add the workflow description file under the .github/workflows/ directory and each commit will automatically trigger a new action run.

  6. Workflow description file in YAML.  This is more concise and readable than the Action 1.0 workflow.
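As a sketch of point 4, a workflow step can call the API v3 directly with the automatically provided token. The step name and the query below are illustrative, not taken from Nebula Graph's actual workflow:

```yaml
- name: check pr state
  run: |
    # GITHUB_TOKEN is injected by GitHub Action; no extra secret is needed.
    curl --silent \
         --header "authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" \
         --url "https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.pull_request.number }}"
```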

Testing cannot be overemphasized for a database solution. In Nebula Graph, testing is mainly divided into unit testing and integration testing. GitHub Action is mainly used to automate unit testing. Meanwhile, it also prepares for integration testing, for example through Docker image building and installer packaging. Finally, it meets the product manager's release requirements. In this way, the team built the first version of its CI/CD process.

PR Test

As an open-source project hosted on GitHub, Nebula Graph must solve the primary testing problem of quickly verifying changes in a PR offered by a contributor. The following aspects should be taken into consideration:

  1. Does the code meet Nebula Graph’s coding style?

  2. Can the code be compiled on different systems?

  3. Does it pass all unit tests?

  4. Has the code coverage dropped?

Only if all the above requirements are met and there are at least two approvals will the changes be merged into master. With the help of open-source tools such as cpplint and clang-format, requirement #1 can easily be met. If #1 fails, the following steps are automatically skipped.
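A minimal sketch of such a style gate, assuming cpplint is available in the runner environment (the file selection is illustrative):

```yaml
- name: cpplint
  run: |
    # Fail the job on any style violation so that downstream jobs are skipped.
    cpplint --quiet $(git ls-files '*.cpp' '*.h')
```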

For requirement #2, the team seeks to compile Nebula Graph source code on all currently supported systems. Building directly on physical machines is no longer an option: the price of a single machine is rather high, and one machine would not be enough anyway. To keep the compilation environments consistent and minimize performance loss, the team finally chose Docker. The process went smoothly thanks to GitHub Action's job matrix and its support for Docker.

nebula graph compilation image

As shown above, Nebula Graph's compilation image is maintained in its Docker image project. Any change or upgrade to the compiler or third-party dependencies automatically triggers the Build task on Docker Hub (see the figure below). When a new pull request is committed, GitHub Action is triggered to pull the latest compilation image and execute the compilation.

executing the compilation

For a complete description of the PR workflow, see pull_request.yaml. Meanwhile, considering that not every PR needs to be immediately tested, and the self-hosted machine resources are limited, the development team has set the following constraints to the CI trigger:

  1. Only PRs that pass the lint verification deliver subsequent jobs to the self-hosted runners. The lint task is relatively lightweight and can run on the machines hosted by GitHub Action, which avoids using up internal resources.

  2. Only PRs labeled with ready-for-testing trigger an action run. Since labeling requires repository permissions, the runners can only be triggered by vetted pull requests. See the code below for the PR label restriction:

YAML

jobs:
  lint:
    name: cpplint
    if: contains(join(toJson(github.event.pull_request.labels.*.name)), 'ready-for-testing')

Here is how it looks when a PR passes all the tests: 

all tests passed

For details on how Code Coverage is conducted in Nebula Graph, please see Integrating Codecov Test Coverage With Nebula Graph.

Nightly Building

Nebula Graph’s integrated testing framework requires that all the test cases be run on the code in the codebase every night. Meanwhile, the team wants some new features to be quickly packaged and delivered to users for a test drive. This requires that the CI system provides the Docker image and rpm/deb package of the codebase each day.

In addition to the pull_request event type, the GitHub Action can also be triggered by the schedule type. Like crontab, the schedule allows users to specify the trigger time of any repetitive tasks. For example, execute tasks at 2:00 AM every day:

YAML

on:
  schedule:
    - cron: '0 18 * * *'


GitHub uses UTC, so 2:00 AM CST is 6:00 PM UTC the previous day.

Docker

The daily built Docker image is pushed to Docker Hub and tagged with the nightly label. In the k8s cluster used for integration testing, the image pull policy is set to Always, so the daily request to upgrade Nebula Graph triggers a rolling upgrade to the latest image of the day, i.e. the nightly version. The development team tries not to leave problems raised today for tomorrow, so there is no additional date tag on the nightly image. See the action details below:

YAML

- name: Build image
  env:
    IMAGE_NAME: ${{ secrets.DOCKER_USERNAME }}/nebula-${{ matrix.service }}:nightly
  run: |
    docker build -t ${IMAGE_NAME} -f docker/Dockerfile.${{ matrix.service }} .
    docker push ${IMAGE_NAME}
  shell: bash
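On the k8s side, the Always pull policy mentioned above can be sketched as follows; the container name and image path are illustrative, not Nebula Graph's actual deployment spec:

```yaml
containers:
  - name: graphd
    image: vesoft/nebula-graphd:nightly
    imagePullPolicy: Always   # re-pull the :nightly tag on every rolling upgrade
```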


Package

GitHub Action provides artifacts to let users persist data in a workflow. GitHub stores artifacts for 90 days, which is more than enough for nightly installation packages. Using the official actions/upload-artifact@v1 action, you can easily upload files from a specified directory to the artifacts. This is how the Nebula Graph nightly package looks:

nebula graph nightly package

Branch Releasing

For easier maintenance and bug fixing, Nebula Graph adopts a branch-release approach: the code is frozen before each release and a new release branch is created. Only bug fixes are allowed on the release branch; feature development is not. Bug fixes are still committed to the development branch and then cherry-picked to the release branch.

At each release, in addition to the source code, the team wants to add the installation packages to the release assets for users to download. Doing this manually is both error-prone and time-consuming, so GitHub Action is ideal for it. Furthermore, packaging and uploading use GitHub's internal network, which is faster.

After the installation package is compiled, you can directly call the GitHub API through the curl command to upload it to the assets. The script looks as follows:

Shell

curl --silent \
     --request POST \
     --url "$upload_url?name=$filename" \
     --header "authorization: Bearer $github_token" \
     --header "content-type: $content_type" \
     --data-binary @"$filepath"



At the same time, for the sake of security, each time an installation package is released the team computes its checksum and uploads it to the assets so that users can conveniently verify the package's integrity after downloading. The steps are as follows:

YAML

jobs:
  package:
    name: package and upload release assets
    runs-on: ubuntu-latest
    strategy:
      matrix:
        os:
          - ubuntu1604
          - ubuntu1804
          - centos6
          - centos7
    container:
      image: vesoft/nebula-dev:${{ matrix.os }}
    steps:
      - uses: actions/checkout@v1
      - name: package
        run: ./package/package.sh
      - name: vars
        id: vars
        env:
          CPACK_OUTPUT_DIR: build/cpack_output
          SHA_EXT: sha256sum.txt
        run: |
          tag=$(echo ${{ github.ref }} | rev | cut -d/ -f1 | rev)
          cd $CPACK_OUTPUT_DIR
          filename=$(find . -type f \( -iname \*.deb -o -iname \*.rpm \) -exec basename {} \;)
          sha256sum $filename > $filename.$SHA_EXT
          echo "::set-output name=tag::$tag"
          echo "::set-output name=filepath::$CPACK_OUTPUT_DIR/$filename"
          echo "::set-output name=shafilepath::$CPACK_OUTPUT_DIR/$filename.$SHA_EXT"
        shell: bash
      - name: upload release asset
        run: |
          ./ci/scripts/upload-github-release-asset.sh github_token=${{ secrets.GITHUB_TOKEN }} repo=${{ github.repository }} tag=${{ steps.vars.outputs.tag }} filepath=${{ steps.vars.outputs.filepath }}
          ./ci/scripts/upload-github-release-asset.sh github_token=${{ secrets.GITHUB_TOKEN }} repo=${{ github.repository }} tag=${{ steps.vars.outputs.tag }} filepath=${{ steps.vars.outputs.shafilepath }}



See release.yaml for the complete workflow file.
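The checksum round trip above can be sketched as follows; the package file here is a stand-in for the real rpm/deb produced by package.sh:

```shell
# CI side: generate the checksum file that is uploaded next to the package.
echo 'payload' > nebula-package.rpm
sha256sum nebula-package.rpm > nebula-package.rpm.sha256sum.txt
# User side: verify integrity after downloading both assets.
sha256sum --check nebula-package.rpm.sha256sum.txt
```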

Commands

GitHub Action provides some Shell commands so that you can control and debug each workflow step in greater granularity right in your Shell console. Some commonly used commands are explained below.

set-output: Setting an output parameter

Shell

::set-output name={name}::{value}


Sometimes you need to pass results among job steps. You can set the output_value to the output_name variable via command echo "::set-output name=output_name::output_value".

In the following steps, you can refer to the above output value via ${{ steps.step_id.outputs.output_name }}.

This method is used in the job to upload assets mentioned in the previous section. One step can set multiple outputs by executing the above command multiple times.
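For instance, one step could hand a fixed nightly tag to later steps like this; the output name and value are illustrative:

```shell
# GitHub Action scans stdout for this marker line and exposes the value
# as ${{ steps.<step_id>.outputs.nightly_tag }} in subsequent steps.
echo "::set-output name=nightly_tag::nightly-20200401"
```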

set-env: Setting an Environment Variable

Shell

::set-env name={name}::{value}


Similar to set-output, you can create an environment variable for subsequent steps in the current job. Syntax: echo "::set-env name={name}::{value}".
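A hedged example; the variable name below is illustrative, not taken from Nebula Graph's workflow:

```shell
# Later steps in the same job will see THIRD_PARTY_ROOT in their environment.
echo "::set-env name=THIRD_PARTY_ROOT::/opt/third-party"
```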

add-path: Adding a System Path

Shell

::add-path::{path}


This command is to prepend a directory to the system PATH variable for all subsequent steps in the current job. Syntax: echo "::add-path::{path}".

Self-Hosted Runner

In addition to GitHub-hosted runners, Action also allows you to host runners on your machine. After installing the Action Runner on the machine, follow this tutorial to add it to your repository, and configure runs-on: self-hosted in the workflow file. 

You can assign different labels to your self-hosted machines. In this way, you can distribute tasks to a machine with a specific label. For example, if your machines run on different operating systems, then a job can be assigned to a specified machine based on the runs-on label.
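For example, a job can be pinned to self-hosted machines that carry particular labels; the labels here are illustrative:

```yaml
build:
  # Runs only on self-hosted runners registered with both extra labels.
  runs-on: [self-hosted, linux, high-performance]
```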

distributing tasks with specific label

Security Enhancements

GitHub does not recommend self-hosted runners for open-source projects, because anyone can attack the runner machine by submitting a PR with dangerous code. However, compiling Nebula Graph requires more resources than GitHub's two-core hosted environment provides, which leaves the team no choice but to self-host runners. To ensure security, the team has done the following:

Deployment on VM

All runners registered to the GitHub Action are deployed in virtual machines, which can isolate the host machine and make it easier to allocate resources among virtual machines. A high-performance host machine can allocate multiple virtual machines to run all the received tasks in parallel. If there is a problem with the virtual machines, you can easily restore the environment.

Network Isolation

The development team has isolated all the virtual machines that hold the runner from the office network to avoid direct access to our internal resources. Even if a PR contains malicious code, it cannot access its internal network for further attacks.

Choose the Right Action

The team makes a concerted effort to choose actions from well-known companies or official releases. When using the work of an individual developer, it is best to check the implementation code to avoid leaking private keys.

Here is the list of official actions provided by GitHub.

Private Token Verification

GitHub Action automatically checks whether private tokens are used in a PR. No private token (referenced as ${{ secrets.MY_TOKENS }}), except GITHUB_TOKEN, can be used in a job triggered by a PR event. This prevents users from stealing tokens by printing them out in a PR.

Environment Building and Clearing

For self-hosted runners, it is convenient to share files between different jobs. But do not forget to clean up the intermediate files each time after the entire action is executed; otherwise, they may affect subsequent jobs and occupy disk space.

YAML

- name: Cleanup
  if: always()
  run: rm -rf build


Also, set the running condition of the step to always() to ensure that the cleanup is executed every time, even if something goes wrong during the run.

Parallel Building Based on Docker Matrix

The development team builds Nebula Graph with containers because it needs to compile and verify on various operating systems, and containers make it easy to separate the environments. GitHub Action natively supports Docker-based tasks.

GitHub Action supports the matrix strategy for running tasks, similar to TravisCI's build matrix. By combining systems and compilers, the team can easily compile the Nebula Graph source code with both gcc and clang on each system. A matrix example is shown below:

YAML

jobs:
  build:
    name: build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        os:
          - centos6
          - centos7
          - ubuntu1604
          - ubuntu1804
        compiler:
          - gcc-9.2
          - clang-9
        exclude:
          - os: centos7
            compiler: clang-9


The above strategy generates seven parallel tasks (4 OS × 2 compilers, minus the one excluded combination). Each task is a combination of an OS and a compiler. This greatly reduces the workload of defining each dimension manually.

You can exclude a certain combination from the matrix by adding it to the exclude option. To access a matrix value inside a task, read the corresponding context variable, e.g. ${{ matrix.os }}. These methods make it very convenient to customize your tasks.

Runtime Container

A user can specify a container environment for each task at runtime so that all steps of the task will be executed in the container’s internal environment. Compared to applying the docker command in each step, this is simpler and clearer.

YAML

container:
  image: vesoft/nebula-dev:${{ matrix.os }}
  env:
    CCACHE_DIR: /tmp/ccache/${{ matrix.os }}-${{ matrix.compiler }}


For container configuration, as with configuring a service in Docker Compose, you can specify image/env/ports/volumes/options and other parameters. On a self-hosted runner, you can easily mount host directories into the container for file sharing.

It is the container characteristics of GitHub Action that make it convenient to accelerate subsequent compilation via cache in Docker.

Compilation Acceleration

The source code of Nebula Graph is written in C++, and the build process is rather time-consuming. Restarting CI from scratch every time wastes computing resources, so when the source code has not changed, the compiled files are cached for acceleration. Currently, the team uses the latest version of ccache for caching; it precisely detects whether a source file has been updated by examining its compilation.

Although GitHub Action itself provides the cache function, the team opted for the local cache strategy. This is because Nebula Graph currently uses static linking for unit test use cases and its size after compilation exceeds the quota assigned by GitHub Action cache.

ccache

ccache is a compiler cache tool. It speeds up compilation by caching previous compilations and supports compilers like gcc/clang. Nebula Graph adopts the C++14 standard, which has compatibility issues with earlier versions of ccache, so ccache in all vesoft/nebula-dev images is manually compiled and installed.

Nebula Graph automatically detects whether ccache is installed during the cmake configuration and decides whether to enable it, so you only need to configure ccache in the container environment. For example, you can set the maximum cache capacity to 1 GB in ccache.conf; when the cache exceeds this threshold, the oldest entries are automatically evicted.

ccache.conf

max_size = 1.0G


We suggest you put the ccache.conf configuration file under the cache directory so that ccache can conveniently read the file.

tmpfs

tmpfs is a temporary file system located in the memory or swap partition, which can effectively alleviate the delay caused by disk IO. Because the memory of a self-hosted machine is sufficient, the ccache directory mount type is changed to tmpfs to reduce ccache read and write time. To use tmpfs mounting type in Docker, please refer to the Use tmpfs mounts documentation. The corresponding configuration parameters are as follows:

YAML

env:
  CCACHE_DIR: /tmp/ccache/${{ matrix.os }}-${{ matrix.compiler }}
options: --mount type=tmpfs,destination=/tmp/ccache,tmpfs-size=1073741824 -v /tmp/ccache/${{ matrix.os }}-${{ matrix.compiler }}:/tmp/ccache/${{ matrix.os }}-${{ matrix.compiler }}


All cache files generated by ccache are placed in the directory mounted as tmpfs.

Parallel Compilation

make itself supports compiling multiple source files in parallel. Passing -j $(nproc) at compile time launches as many jobs as there are cores. Configure the step in the action as follows:

YAML

- name: Make
  run: cmake --build build/ -j $(nproc)


Things to Improve

As noted, GitHub Action has a lot of advantages, but are there drawbacks? After spending some time with it, here are a few thoughts to share:

  1. It only supports newer operating systems. Many actions are developed against newer Node.js versions and cannot be used directly in old Docker containers like CentOS 6; attempting it throws an error saying the library files that Node.js depends on cannot be found, so the action cannot start. Since Nebula Graph also supports CentOS 6, tasks on this system must be handled differently.

  2. It is not easy to verify locally. Although there is an open-source project act in the community, there are still many restrictions. For example, sometimes you must repeatedly commit to your repository to ensure the action modification is correct.

  3. Currently lacking guidelines. When customizing numerous tasks, it feels like you are coding in the YAML configuration. There are currently three main approaches: You can split the configuration files based on tasks. You can customize an action via a GitHub SDK. Or, you can write a long shell script to complete the tasks and call the script in your tasks.

So far, it is still under debate in the community which approach is better, whether a combination of small tasks or big tasks. The Nebula Graph development team found the approach to combine small tasks helps easily locate task failures and determine the execution time of each step.


  4. Part of the action history cannot be cleaned up. If the workflow name is changed, the old check-run records remain on the action page, affecting the user experience.

  5. It lacks a manual job/task trigger like GitLab CI's. No manual intervention is possible during action execution.

  6. Actions themselves iterate constantly, so upgrades sometimes need to be maintained, for example, checkout@v2.

Overall, GitHub Action is a highly useful CI/CD system. As a product that stands on the shoulders of predecessors like GitLab CI and Travis CI, it has a lot going for it, and there is a lot to learn from it.

What’s Next

Customized Action

A while ago, Docker released its first Action to simplify the Docker related tasks. In the future, the team will also customize actions dealing with the complex CI/CD requirements and enable them in all Nebula Graph repositories.

For some general actions, like appending assets to the release function, the team will put them in an independent repository and publish them in the action marketplace. The exclusive ones will be placed in the .github/actions directory of each repository.

This simplifies the YAML configuration in workflows. You only need to use a customized action which has better flexibility and expandability.

Integration with IM (DingTalk/Slack)

You can develop complex action applications through the GitHub SDK and combine it with the customization bots of IM tools like DingTalk and Slack to realize a lot of automation and interesting applications.

For example, when a PR is approved by more than two reviewers and all check runs have passed, you can send a message to a DingTalk group and tag someone to merge it. This saves engineers from checking the state of every PR on the PR list.


Published at DZone with permission of Jamie Liu. See the original article here.