IoT DevOps Hands-On (Day 2): Travis CI


Interested in seeing how DevOps-oriented tools and technology can be applied to IoT development? Let's check out how well-suited Travis CI is for IoT devices.


In the previous article, we had the chance to talk about the need for DevOps in IoT device development, and whether the current CI/CD solutions are good enough for it, by trying to develop an actual IoT product: a vehicle tracking device. In this article, we start with our first contestant, Travis CI. There is a very good reason to start with Travis CI: I really like their '80s-looking mascot with the magnificent mustache!


(Boring) Background

Travis CI provides a virtual environment that we can use to build and test our code by executing one or more user scripts and/or shell commands. Based on the result of these scripts, Travis CI marks the build as successful or failed. Travis CI can also deploy the code to a web server or application host, but this option is not really useful in our case, since we need to deploy the code to an actual device.

A build on Travis CI is made up of two sequential phases:

  • Install phase: where we install all the required dependencies

  • Script phase: where we run our build and test scripts

Additional phases are available that we can use before or after the install and script phases:

  • before_install

  • before_script

  • after_success, after_failure

  • after_script

  • before_deploy, deploy, after_deploy

In the Travis CI world, a job is a process that clones the code into a virtual environment and runs a series of the above phases. A build is a group of jobs. And build stages are a way to have sequential groups of jobs. For example, we may have a 'build' stage with one group of jobs, followed by a 'testing' stage with another group of jobs, and so on.

Build stages are a beta feature and, to be honest, the concept of multiple jobs is a little bit unclear to me. There doesn't seem to be a solid way to define a job other than using environment variables to trigger multiple builds. That is OK if, for example, I want to build against two library versions, but it's not what we really have in mind when we talk about a job process in CI/CD.

Moreover, the way the .travis.yml file is designed makes me assume that there is only a set of phases that belong to a single (implicit) job. My best guess is that the concept of jobs was not available from the very beginning in Travis CI, but if there are any Travis CI experts reading this, I would be glad to hear more about it.

Initial Setup

Travis CI has tight integration with GitHub, so it was really easy to enable it for my GitHub account. I just went to the Travis CI front page and signed in using my GitHub credentials. After that, I was given the option to select which repositories I would like to activate. I selected the 'vehicle-tracking' repository, as you can see below.

Travis CI

What remains is to add a .travis.yml file to our repository and push some code to trigger Travis CI. I followed their getting started guide to learn more about the .travis.yml file, which is where all the configuration for setting up Travis CI goes.

Playing With .travis.yml

The ultimate goal is to build the 'vehicle-tracking' application and then use the resulting binary to update one or more devices and run some simple test cases. The first step is to build the application, and to do that, I need build instructions and a toolchain. The build instructions can be found in the firmware repository that we forked earlier. They come in the form of makefiles, and they can be used to build applications (like ours) that are external to the firmware. For the toolchain, we can use the official GNU ARM Embedded Toolchain, since we are dealing with a Cortex-M3 MCU. The same toolchain can be used for a large number of today's IoT devices.

I've created a first version of the .travis.yml file in the root of the vehicle-tracking application, as you can see below:

language: bash

before_install:
    - echo "---> before_install phase..."
    - sudo apt-get update && sudo apt-get install -y make libarchive-zip-perl git gcc-multilib vim

install:
    - echo "---> install phase..."
    - pwd
    - git clone https://github.com/cpipilas/firmware.git firmware
    - cd firmware
    - git checkout release/stable
    - cd ..
    - wget https://launchpad.net/gcc-arm-embedded/4.9/4.9-2015-q3-update/+download/gcc-arm-none-eabi-4_9-2015q3-20150921-linux.tar.bz2
    - tar xjf gcc-arm*.tar.bz2
    - export PATH=$PATH:/home/travis/build/cpipilas/vehicle-tracking/gcc-arm-none-eabi-4_9-2015q3/bin
    - arm-none-eabi-gcc --version

before_script:
    - echo "---> before_script phase..."
    - mkdir particleGeoLoc && cp -Rf vehicle_tracking.cpp libs particleGeoLoc

script:
    - echo "---> script phase..."
    - cd firmware/main
    - make all PLATFORM=photon APPDIR=../../particleGeoLoc
    - cd ../../

after_success:
    - echo "Tests completed successfully!"

after_failure:
    - echo "Some test cases failed!"

Most of the instructions in the .travis.yml file above are self-explanatory, but I will quickly go through the most important ones.


The 'language' option is used to set the preferred programming language and provide more build options for your project. That would be great if we were creating a project in Java or Ruby, but sadly, it is useless in an embedded project like ours, since we'll be dealing with a cross toolchain. I ended up using 'bash' for my language. Yes, I know, the life of a second-class passenger...


The 'before_install' phase can be used to install some basic stuff in the environment before installing the actual dependencies. By the way, we are talking about a Linux Ubuntu environment running in a container.


The 'install' phase is where all the dependencies should be installed. I used this phase to clone the firmware repository, download the ARM tools, and update my PATH to include the new toolchain. Needless to say, we could run all these steps as one big Linux command, but for readability reasons, I mostly used an individual step for each command.


In the 'before_script' phase, we can do some preparation before running the actual build and tests. I just moved the application into a separate folder, as needed for the build.


The 'script' phase is the place to run your scripts or standalone commands. The only thing that I want to do for the time being is build the application, so I added the 'make' command to build an external application for the Particle Photon platform.


The 'after_success' and 'after_failure' phases are placeholders for taking further actions on the success or failure of the script phase.

Gun (Travis) Trigger

In theory, as soon as we commit and push the .travis.yml file to GitHub, it will trigger a new Travis CI build. I have to say that on the first try, nothing happened, and I didn't get any feedback about the reason; later, I realized that I had a syntax error. After fixing that, I was able to see something in the Travis CI web interface. The navigation on the website is not ideal, so make sure to go to the 'Build History' section. There, I waited for about 40+ seconds for the virtual environment to start:

Travis CI web interface

And then I got a glorious terminal interface with some nice red fatal errors:

Travis CI virtual environment

It seems that Travis CI is not able to handle my library submodule, even though it is a public repository. This is actually expected, because Git will use SSH to initialize the submodules, so we have to find a workaround. Checking the documentation, it seems that Travis CI can be told to skip the submodules using the following option:

    git:
      submodules: false

What remains is to check out the submodule using HTTPS, so add the following commands in the 'before_install' phase:

    - sed -i 's/git@github.com:/https:\/\/github.com\//' .gitmodules
    - git submodule update --init --recursive
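Before pushing, we can sanity-check the sed rewrite locally. The .gitmodules content below is made up for illustration (the submodule name and URL are hypothetical), but the substitution is the same one used in the .travis.yml:

```shell
#!/bin/bash
# Sanity-check the SSH-to-HTTPS rewrite on a sample .gitmodules.
# The submodule name and URL below are hypothetical.
cat > /tmp/gitmodules.sample <<'EOF'
[submodule "libs/geolib"]
    path = libs/geolib
    url = git@github.com:cpipilas/geolib.git
EOF

# Same substitution as in .travis.yml: rewrite the SSH remote to HTTPS
sed -i 's/git@github.com:/https:\/\/github.com\//' /tmp/gitmodules.sample
grep url /tmp/gitmodules.sample
```

After the rewrite, 'git submodule update --init --recursive' can fetch the submodule anonymously over HTTPS, with no SSH keys involved.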

After pushing the new .travis.yml file, everything worked as expected:


This is a good indication that the application firmware was built successfully and a new device binary created, ready to be installed on our device.

I know where the binary is, but how can we get it? Well, we can't get it directly from Travis CI; it seems that there is no support for that. Instead, we can either use the GitHub Releases feature to attach the binaries to git tags or use Amazon S3. I would say I hate both options, because they add complexity and extra configuration. I just wanted to have the binary ready for download, either from their web interface or somewhere else.

Just a reminder, we are not talking about a web application that we have already tested in the script phase and that's ready to be deployed. We are talking about an embedded application that needs to be deployed on the device before the actual testing.

Getting More Juice

Now that we have something that is working, we can try to add more stuff in Travis CI. 

Travis CI supports multiple parallel builds using a build matrix with the help of environment variables. So, it's possible to build multiple binaries for different hardware platforms or run various test cases with different options.

For example, we can produce different binaries from the make command below...

make all PLATFORM=$hw_plat APPDIR=../../particleGeoLoc

...by simply defining the following section in the .travis.yml file:

    env:
        - hw_plat=photon
        - hw_plat=electron

This will start two jobs in parallel, one using 'hw_plat=photon' and the other using 'hw_plat=electron'. It's a nice-to-have feature, but I'm not sure about the way it's implemented. I would prefer a better way to export variables into the virtual environment.

Anyway, I think it's time to move all the shell commands that we have in the script phase into a shell script. Below, I created a build.sh script that will generate the actual artifacts for the device:


cd firmware/main
make all PLATFORM=$hw_plat APPDIR=../../particleGeoLoc
cd ../../

And I've also updated the script phase in the .travis.yml file:

    - echo "---> script phase..."
    - bash build.sh
    - echo $?

After pushing the code to GitHub, we get the following output from Travis CI:

Travis CI with 2 jobs

Note the creation of two separate jobs (or virtual environments) because of the definition of the hw_plat variable that we talked about earlier. The result of the build was successful, but one interesting thing that you should be aware of is that 'echo $?' should not be used to check the exit code of the previous command, since the result is undefined. Most probably, this happens because each line in the .travis.yml file gets additional processing internally.
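Since Travis CI wraps each line of the script phase, a safer pattern is to capture the exit status on the same line that produces it, or to let build.sh fail fast on its own via 'set -e'. A minimal sketch, where run_build is a hypothetical stand-in for 'bash build.sh':

```shell
#!/bin/bash
# run_build is a hypothetical stand-in for 'bash build.sh';
# here it simply fails with exit code 3 to demonstrate the pattern.
run_build() { return 3; }

# Capture the exit code on the same line as the command,
# instead of relying on a separate 'echo $?' line.
status=0
run_build || status=$?
echo "build exited with status $status"
```

Alternatively, adding 'set -e' at the top of build.sh makes the script abort on the first failing command, which Travis CI then reports as a failed build without any extra bookkeeping.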

echo $? result

Now that I'm happy with the binary creation, we'll see how we can update our device and run some simple test cases. This means that we have to use the Particle Cloud REST API. According to the Particle documentation, an example request to update the device firmware is:

$ curl -X PUT "https://api.particle.io/v1/devices/0123456789abcdef01234567?access_token=1234" \
       -F file=@my-firmware-app.bin \
       -F file_type=binary

So we need the access token from our Particle account, the Particle device ID (the large number in the above example), and the binary file that we are going to use to update the device. Both the access token and the device ID can be retrieved from our Particle account. The thing is that we don't want to share this information, so it would be great if we could find a way to hide it from the source code and the Travis CI configuration. Fortunately, Travis CI supports encrypted variables for sensitive data. Well done, Travis!
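With those variables in the environment, the request can be assembled without hard-coding any secrets. A minimal sketch; the default device ID and token are the dummy values from the Particle example above, not real credentials:

```shell
#!/bin/bash
# Assemble the Particle OTA request URL from environment variables.
# The fallback defaults are the dummy values from the Particle docs example.
DEVICE_ID="${DEVICE_ID:-0123456789abcdef01234567}"
ACCESS_TOKEN="${ACCESS_TOKEN:-1234}"

ota_url() {
    echo "https://api.particle.io/v1/devices/$1?access_token=$2"
}

url=$(ota_url "$DEVICE_ID" "$ACCESS_TOKEN")
echo "$url"
# The actual update request would then be:
# curl -X PUT "$url" -F file=@my-firmware-app.bin -F file_type=binary
```

Because the real values only ever live in encrypted Travis CI variables, nothing sensitive ends up in the repository.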

We can encrypt the variables with the help of Travis CLI. This tool is written in Ruby and published as a gem. First, you need to install the gem:

gem install travis

Then, from the repository directory, we encrypt the variables that we want using the command below:

travis encrypt MY_SECRET_ENV=super_secret --add

That will generate an encrypted value for our variable and automatically add it to our .travis.yml file as a global environment variable. In our case, I encrypted ACCESS_TOKEN and DEVICE_ID, and the result can be seen below:

env:
  matrix:
    - hw_plat=photon
    - hw_plat=electron
  global:
    - secure: B5a4LsDh0PTrZgD1iM8MescmcAoHdjFgK5HLmYiO51OdwBqypqnZ1WTP7M9nqCzwSs2f1IOQKmvDBT2XZaZ//nhl3IlUUFvS8pXFeRMJ2USYtKVe5Ld+umICM6xWTV0CWT4R4a/BvQgV7bUwUO04CqgnG2PjC5\
    - secure: AgsA3pWdZytNF/eVfhbiq1ltLrJV91bdKUT/U3oaXyEFQDFBe6YZgdaDtiWexF/gjeexoEin0z2hh8zZWhNgMVwzasF7jMp2lwKFb9PTs4N4C1P+FBNXwOf8b1oruaqWOXJVlvzKOuLNR8m82bfD/yncCAQvnl\

I no longer need the hw_plat=electron option (so long, electron...), since we used it only to demonstrate how Travis CI can create separate virtual environments from a matrix of variables. So we can move hw_plat=photon under the global section.

The next step is to create a tests.sh file to store all our tests. Initially, we'll only add one command, a request for an OTA update of our device with the firmware application that we created in the previous steps:


# ota update
curl -X PUT "https://api.particle.io/v1/devices/$DEVICE_ID?access_token=$ACCESS_TOKEN" -F file=@firmware/modules/particleGeoLoc/target/particleGeoLoc.bin -F file_type=binary

And we update the script phase of our .travis.yml file with the addition of the new script as well:

- echo "---> script phase..."
- bash build.sh
- bash tests.sh

After pushing the changes to GitHub, Travis CI is triggered, and the virtual Linux environment starts with the following info in the terminal window:


So it seems that our secret variables have been set correctly.

The Travis CI build finished without errors, but I don't have any indication of whether the firmware update actually happened. We need to create a much more intelligent test script that can handle this case. For example, one solution may be to subscribe to device events, specifically to those related to flash updates. But then we have to handle the stream of events coming from a Particle device, and we also have to make sure that we support not just one but an unlimited number of devices. It can become very complex, and we are still talking about the first test case, the OTA update.
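As a rough sketch of what such a test script would have to do, here is how a flash status could be extracted from a Particle server-sent-events stream. The sample stream and the 'spark/flash/status' event name are assumptions based on my reading of the Particle docs; in a real run, the stream would come from the events endpoint of the Particle REST API, not a hard-coded string:

```shell
#!/bin/bash
# Hypothetical sample of a Particle SSE stream; in reality it would come
# from the events endpoint of the Particle REST API. Event and field names
# are assumptions and should be verified against the current API.
sample_stream='event: spark/flash/status
data: {"data":"started","ttl":60,"coreid":"0123456789abcdef01234567"}

event: spark/flash/status
data: {"data":"success","ttl":60,"coreid":"0123456789abcdef01234567"}'

# Return the last reported flash status found in the stream
flash_result() {
    grep -A1 'spark/flash/status' <<< "$1" \
        | grep -o '"data":"[a-z]*"' | tail -1 | cut -d'"' -f4
}

flash_result "$sample_stream"
```

Even this toy version shows the problem: the streaming, parsing, timeout, and multi-device logic is entirely on us, with no help from the CI tool.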

In the end, I have to write most of the logic on my own, and Travis CI seems unable to provide any help with this. It's clear that Travis CI is not designed for IoT. It's not designed to look behind the REST API, to check on the actual players of the test case. It only provides a way to execute a generic test case, which is not really IoT-friendly, I would say. However, I can see the benefit when Travis CI is used on a web application and the deployment takes place on a cloud service.

Currently, the only way to see what happened is outside Travis CI, using the Particle Console, where we can see various device events. There, I was able to find the following information:


That is a good indication that the device has been updated with new code. Great!

By adding more requests from the Particle API in the tests.sh, we can create a number of test cases, but as I said earlier, it's like reinventing the wheel and starting everything from scratch. I have to be very familiar with the Particle REST API, I have to create the logic for getting device responses, I have to support multiple devices for each test case, and in the end, I have to maintain all of these. Oh, man... where is the CI/CD automation? Where is that button that creates everything?

Another interesting point is the way the build history of Travis CI is used. Let's take a look at the build history of our vehicle-tracking application for example:

Travis CI Build History

In the build history above, we can see the branch we are working on, the commit message, the GitHub commit, a timestamp, the build time, and a 'restart' button to re-run the build. All of this is cool, but what's missing? What's missing is making this build history interactive: being able to create a new build by duplicating an old one and adding a different commit or a different test case. Why? Because most of the time, the source code and the test cases are developed in parallel, and when a regression is introduced, we are never sure whether the error is in the source code or in the test case.

It would be very cool to be able to go back to a previous test case (and Travis CI configuration) but with the current source code commit. Or, the other way around, to be able to go back to a previous source code version with my current tests/configuration and see who needs to be hanged. For a web application, this may not be important, but for an embedded application, tracking this down manually may take half a day or more. This feature would be a killer for IoT device development. OK... don't get overexcited, it doesn't exist yet, at least not on Travis CI.


Conclusions

I believe Travis CI can be partially used for IoT. It's also limited to GitHub repositories only. We can use it to create device binaries, for example (assuming we like the idea of storing them as git tags or in Amazon S3), but even then, we need to know all the internals of how to build our device binary.

When it comes to further integration and automation with testing, I don't believe it's a good fit for IoT device development. It only provides basic support for someone to create their own world; there is no guidance on how to create it, nor any automation for IoT. For educational purposes or fun IoT projects, though, it may be worth using to offload the build process.

What's Next?

That's all for today! In the next article, we continue our search for the best DevOps solution for IoT with GitLab CI/CD. Stay tuned!


Opinions expressed by DZone contributors are their own.
