
Putting the 'Micro' Into Microservices With Raspberry Pi


Building real-life microservices on Raspberry Pis, including service discovery, scalability, and the use of JAX-RS.


I’ve given several talks on microservices lately (at JAX London, JFokus, and JavaLand, all wonderful conferences). Microservices are an interesting way of dealing with some of the complexity in a non-trivial software project by partitioning the system and, like many of the technology choices we make, there are trade-offs.

I wanted to explain these trade-offs in my talks by ensuring they were all included in my demo. This was — I hope — educational, but it was definitely not good for my blood pressure. I don’t want to duplicate the great microservices content already out there, but let me share the three lessons that mattered most to me.

Distributed Systems and Failure

Microservices replace monolithic systems with distributed systems. Almost by definition, this means an explosion of moving parts. In a demo context, when everything is running on a single laptop, it’s easy to forget that a microservices architecture really is a system with lots of different things trying to communicate with one another over an unreliable network. Even in a ‘real’ system, with virtualization and containers, it’s not always obvious how much complexity is involved in the aggregated system — as long as things work well. After all, the reason the fallacies of distributed computing are known as fallacies is because they’re assumptions we all tend to make.

I decided to really put the ‘micro’ into ‘microservices’, so I prepared a system of Raspberry Pis and pcDuinos. WebSphere Liberty is so lightweight that it can easily run on a Pi, and it’s so small and cheap that I can easily build up a collection of computers. I called it the ‘data center in a handbag.’ Because each machine really is a machine, the topology is more obvious:


The complexity of the connections also becomes really obvious (this picture isn’t staged!):


Shortly after this picture was taken I switched to using Wi-Fi for network communication, just because I couldn’t deal with the Ethernet cables anymore. Having lots of little computers makes the nature of microservices architectures more tangible, but it also makes getting a demo working a lot more challenging. To be honest, this is kind of a good thing. Building a solid microservices architecture means building in failure tolerance — trialing such an architecture on a bunch of embeddable computers ensures this tolerance definitely isn’t forgotten! (Remember, you’re replacing the API calls in your application with HTTP calls in a system.)
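That last point is worth a sketch. A call that used to be an in-process method invocation now crosses an unreliable network, so it needs at least rudimentary failure handling. This is a hypothetical illustration, not code from the demo; a real system would add backoff, timeouts, and a circuit breaker:

```java
import java.util.function.Supplier;

// Minimal retry wrapper: a toy illustration of failure tolerance.
// Each attempt may fail because the network is not reliable.
public class Retry {
    public static <T> T withRetries(Supplier<T> call, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // exhausted all attempts
    }
}
```

Nothing like this is needed when the ‘call’ is a local method — which is exactly the complexity a laptop-only demo hides.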

Service Discovery

The nice thing about a data center in a handbag is it’s pretty portable; I’ve taken mine all over Europe. To keep things portable I used DHCP, so every time I moved my system, all the IP addresses changed.

In a distributed topology, the IP addresses of service providers may not be known.

This made service discovery vital. Service discovery allows microservices to talk to other microservices without hard-coding the IP address or hostname in advance.

There are a few reasons why IP addresses aren’t good enough. If a system goes down, its replacement would have to have the exact same address; otherwise, all calls to the service from other components would fail, even though a replacement service was up. Because my computers would be moving between different subnets (and buildings, and countries!) they really had to use DHCP, and that’s incompatible with having a fixed IP address. More seriously, if we want to scale out a service, we don’t have any way of distributing traffic across the service instances if the endpoint IP addresses are hardcoded.

A distributed topology where each service has a dedicated DNS entry.

We can get some of the way if we have good control of our DNS registry, and a load-balancer also helps — but at this point, we’re halfway to a service discovery solution.

Service discovery is an emerging technology, and there are a number of popular solutions. At the moment, Eureka, Apache ZooKeeper + Curator, Kubernetes, etcd, and Consul are all popular. Consul by HashiCorp is growing in popularity and has a number of attractive features, including broad platform support and a DNS interface. Bluemix also includes a beta service discovery feature, and a complementary service proxy feature (also in beta).

I noticed that almost every service discovery framework left responsibility for registering services with the application itself. Since I’m lazy, and had five applications to manage, that seemed like a lot of hard work. The application server knows whether it’s up (obviously), and it knows exactly what REST endpoints it’s exposing, so why not extend it to handle service registration?

WebSphere Liberty has really good extensibility, so it was easy to hook into the start and stop events for each application which exposed JAX-RS endpoints. The extension can then re-use the annotation scan already done by the server to work out the name of each REST endpoint and register it as a service. I use Consul as my service registry, but the same principle could be used for any registry. The source code for the plug-in is available on GitHub. (As well as demonstrating how to integrate Consul and Liberty, it’s a useful sample of how to hook Liberty’s annotation scanning, which has all sorts of uses.)
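For a flavour of what such a plug-in has to do, here is a simplified sketch against Consul’s HTTP API — this is not code from the actual plug-in, and the service name, address, and port are made up. Self-registration boils down to one PUT to the local Consul agent:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Sketch of Consul self-registration (not the real Liberty plug-in).
public class ConsulRegistrar {

    // Build the JSON body for Consul's /v1/agent/service/register endpoint.
    static String registrationJson(String name, String address, int port) {
        return String.format(
            "{\"Name\":\"%s\",\"Address\":\"%s\",\"Port\":%d}",
            name, address, port);
    }

    // PUT the registration to the local Consul agent (default port 8500).
    static void register(String name, String address, int port) throws Exception {
        URL url = new URL("http://localhost:8500/v1/agent/service/register");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(registrationJson(name, address, port)
                      .getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != 200) {
            throw new IllegalStateException("registration failed");
        }
    }
}
```

The interesting part in the real plug-in is where the name, address, and port come from: the server’s own annotation scan and configuration, rather than anything the application has to supply.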

Scaling Out to the Cloud

While it’s a lot of fun, the data center in a handbag probably isn’t the most realistic topology for a production system, and the Raspberry Pis were a bit too good at demonstrating the distributed computing fallacies. I also deployed the same set of applications across four Bluemix nodes running Liberty.

I had to make a couple of changes to accommodate the different environment in a managed cloud. In particular, when things are running in a container, the IP address that applications running inside the container see is not the same one as the rest of the world sees. This breaks service discovery self-registration. It turns out this is a common problem for service discovery in containers.

It also happens that knowing the IP address of a server is less important in Bluemix; both Cloud Foundry applications and container groups have a named route, and that can be used to address the service. Bluemix also takes care of load balancing, so a single route may be served by a number of servers. This made service discovery less essential, but still useful:

  • I still wanted to track how many of my services were up on the Consul dashboard.
  • I wanted to support a hybrid environment with Raspberry Pis and Bluemix services working together, and that definitely needed service discovery. For example, a web front-end running on a Pi could talk to a Consul server running on a Bluemix Docker container, and use some services running on other Pis as well as some services running on Bluemix.
  • I wanted common code for both the handbag and cloud topologies.

Normally, the only way to let a containerised application register itself with a service registry is to tell it what IP address or hostname it should use, via an environment variable. Bluemix does this for us, so the public hostname of an application is available in the VCAP_APPLICATION environment variable. It’s also injected into Liberty’s server.xml as ${application_uris}, which is even more convenient.

I added an extra check to my hostname introspection logic to check for the VCAP_APPLICATION environment variable and parse it, if available. (Because it’s integrated with the Bluemix infrastructure, the Bluemix Service Discovery service takes care of the mapping between the private IP address and the public one, so services register themselves using their private IP address, but are advertised by the registry using the public one.)
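The extra check could look roughly like the sketch below. The surrounding class and fallback logic are assumed, not taken from my code, and a real implementation should use a proper JSON parser rather than a regular expression; only the `VCAP_APPLICATION` variable and its `application_uris` field come from Cloud Foundry itself:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pick the public hostname out of VCAP_APPLICATION, falling back
// to the locally-introspected hostname when not running on Cloud Foundry.
public class HostnameResolver {

    // Matches the first entry of the "application_uris" JSON array.
    private static final Pattern URIS =
        Pattern.compile("\"application_uris\"\\s*:\\s*\\[\\s*\"([^\"]+)\"");

    static String publicHostname(String vcapApplication, String fallback) {
        if (vcapApplication != null) {
            Matcher m = URIS.matcher(vcapApplication);
            if (m.find()) {
                return m.group(1); // the first route is the public hostname
            }
        }
        return fallback; // not on Cloud Foundry: use the introspected name
    }
}
```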

JAX-RS is Pretty Awesome

The first thing I did in my demo was refactor a monolith into microservices. There are a lot of things which can make this really hard: testing changes completely, poor encapsulation will have to be sorted out, and the DevOps pipeline will be a completely new beast. However, one thing was actually pretty easy: I’d used CDI to handle communication between the different elements of my original application and it was basically a one-for-one swap to using JAX-RS to publish and consume RESTful services.

JAX-RS is part of the Java EE specification and is, of course, built into Liberty. It does a great job of abstracting away the details of REST. Even though my services were going over the wire using HTTP and JSON, none of that was exposed to my code. To publish, I didn’t have to change the APIs at all, just swap out the CDI annotation for a JAX-RS one.
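As a sketch of what that swap looks like — the class, path, and `CatStore` helper here are hypothetical reconstructions, not the original demo source — publishing comes down to a few JAX-RS annotations on an otherwise unchanged class:

```java
import java.util.Set;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical resource: previously a plain CDI bean; adding
// @Path/@GET/@Produces publishes the same API as a REST endpoint.
@Path("/cats")
public class CatResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Set<Cat> getCats() {
        return CatStore.allCats(); // business logic unchanged
    }
}
```

Liberty handles the JSON serialization and HTTP plumbing; the method signature stays exactly as it was.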


To consume a service took about three lines of code:

Client client = ClientBuilder.newClient();
WebTarget target = client.target("http://catastrophe-cats.mybluemix.net:9080").path("/rest/cats");
Set<Cat> cats = target.request(MediaType.APPLICATION_JSON).get(new GenericType<Set<Cat>>() {});

If you’re wondering about the names — all good internet content involves cats. Mine was a live demo, and I have some experience of live demos, so I called it — of course — cat-astrophe.

Final Thoughts

So what have I learned at the end of my microservices adventure? Microservices are hard work. If you’re thinking about microservices, you’d better think about what infrastructure you’re going to put in place to support service discovery, circuit breakers, logging, and testing. Our industry is moving really fast, so the best solution at the beginning of my journey (like Consul) wasn’t necessarily the best solution by the time I’d got to the end (now that we have Bluemix Service Discovery). Oh, and Liberty extensibility rocks. But we knew that already, right?



Published at DZone with permission of Holly Cummins, DZone MVB. See the original article here.

