Fast Prototyping: Breaking Up the Monolith
We wrap up this microservices mini-series by showing how to split the monolith we built into microservices, using an API gateway and inter-service communication.
In the last part of this mini-series, we will divide the previously built monolith into loosely coupled microservices. If you have not read the previous parts, I suggest reading at least the assumptions described in the first article.
The current architecture of our system is presented in the diagram below.
API Gateway
One of the key challenges we face during any refactoring of an existing system is ensuring that the services exposed to current users remain unchanged. This is important for many reasons, but in this case, let us consider just one: another team is building an application that uses our API, and changing the API would involve additional costs for them.
This type of situation is solved by the API Gateway pattern, used in our case as a facade for the functionality of the underlying system.
We will build our API Gateway using NGINX configured as a reverse proxy. Running it as a Docker container, all we need to do is provide the appropriate configuration file.
#nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        location /admin/ {
            proxy_pass http://myhost:8080/admin/;
        }
        location /api/cs/ {
            proxy_pass http://myhost:8080/api/cs/;
        }
        location /api/cm/ {
            proxy_pass http://myhost:8080/api/cm/;
        }
        location /api/auth/ {
            proxy_pass http://myhost:8080/api/auth/;
        }
        location /api/user/ {
            proxy_pass http://myhost:8080/api/user/;
        }
        location /api/task/ {
            proxy_pass http://myhost:8080/api/task/;
        }
        location / {
            proxy_pass http://myhost:8080/;
        }
    }
}
Running the API Gateway
$ docker run --name my-nginx -p 9090:80 \
--add-host=myhost:192.168.0.4 \
-v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro \
-d nginx
To enable the Docker container to connect to services running outside the container, we must map the hostname used in nginx.conf (myhost) to the host's IP address using the --add-host option.
Note: You need to determine your host's IP address (192.168.0.4 in the example). Linux users can use the ifconfig or ip addr command.
If we now run our system built as a monolith (as described in the previous article), its API and web application should be available on port 9090. So, we can start dividing it into separate microservices without worrying about access to the whole solution.
Dividing the System
The system we built with the Cricket Microservices Framework consists of well-separated adapters that communicate with each other using event objects. If an adapter uses database tables, those tables are dedicated to it. The web application also does not refer to any component directly, but calls the REST API via AJAX.
Thanks to this design, we can start by running several copies of the service, each listening on a different port. By configuring the API Gateway, we separate the target microservices, directing requests to dedicated instances of the service.
Let's look at the new configuration file:
#nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    server {
        location / {
            proxy_pass http://myhost:8080/;
        }
        location /admin/ {
            proxy_pass http://myhost:8080/admin/;
        }
        location /api/cs/ {
            proxy_pass http://myhost:8081/api/cs/;
        }
        location /api/cm/ {
            proxy_pass http://myhost:8081/api/cm/;
        }
        location /api/auth/ {
            proxy_pass http://myhost:8081/api/auth/;
        }
        location /api/user/ {
            proxy_pass http://myhost:8081/api/user/;
        }
        location /api/task/ {
            proxy_pass http://myhost:8082/api/task/;
        }
    }
}
As we can see, the API Gateway routes requests to one of three running instances:
The web applications running on port 8080
The administration service (user and access management, content management, and content service) on port 8081
The task service on port 8082
So, to run independent copies of the previously created service, let's use the following procedure three times:
1. Copy the service to the dedicated location.
2. Modify the configuration file, setting the required port and serviceurl parameters (see the cricket.json configuration file fragment below).
{
  "@type": "org.cricketmsf.config.ConfigSet",
  "description": "This is sample configuration",
  "services": [
    {
      "@type": "org.cricketmsf.config.Configuration",
      "id": "Microsite",
      "service": "org.cricketmsf.services.Microsite",
      "properties": {
        "host": "0.0.0.0",
        "port": "8081",
        "serviceurl": "http://localhost:9090",
        / ... CUT ... /
      },
      "adapters": {
        / ... CUT ... /
      }
    }
  ]
}
3. Run the service instance.
Note: The final version of the monolith website, containing all the changes described in this article, is available on GitHub.
After running all three instances in separate terminal windows, we can open the web applications at http://localhost:9090, http://localhost:9090/admin, and http://localhost:9090/tasks and observe requests being directed by the API Gateway to the appropriate services.
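If you prefer to verify the routing from code rather than a browser, a minimal smoke test can request a few of the proxied paths through the gateway and print the response status. This is only an illustrative sketch: the class name is made up, the paths are taken from the nginx.conf above, and the actual status codes depend on how each service responds.
// GatewaySmokeTest.java (illustrative sketch, not part of the project)
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GatewaySmokeTest {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Paths taken from the gateway configuration shown above
        String[] paths = {"/", "/admin/", "/api/user/", "/api/task/"};
        for (String path : paths) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9090" + path))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Each request is proxied to the instance on port 8080, 8081, or 8082
            System.out.println(path + " -> HTTP " + response.statusCode());
        }
    }
}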
The Final Break-Up
From now on, we can develop each microservice separately from the others, changing its functionality or even the technology in which it is implemented. We should start by removing unused adapters from the configuration and code of each of them.
We will illustrate such changes by moving the functionality of sending notifications about newly created tasks from the microservice in which the event occurs to another microservice.
Inter-Service Communication
As our monolith was event-driven, we must ensure that, after splitting it into separate microservices, all events are still handled by the dedicated service, regardless of where the event originated.
This can be achieved, for example, by using an MQTT message broker as the event bus to which all services are connected.
In our example, we will use Eclipse Mosquitto and its publicly available test server, test.mosquitto.org.
Starting with release 1.2.46, Cricket provides MqttPublisher and MqttSubscriber adapters and MQTT protocol support for the event Dispatcher. These features can be used to share events.
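To make clearer what these adapters do under the hood, below is a minimal, stand-alone sketch of MQTT publish/subscribe using the Eclipse Paho Java client against the same test broker. This is not Cricket code; the class name, topic, and payload are only illustrative, and the Paho dependency (org.eclipse.paho.client.mqttv3) is an assumption you would add yourself.
// MqttEventBusSketch.java (illustrative sketch, not Cricket's implementation)
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttEventBusSketch {

    public static void main(String[] args) throws MqttException, InterruptedException {
        String broker = "tcp://test.mosquitto.org:1883";
        String topic = "org.cricketmsf/events/TASK/CREATED";

        // Subscriber side: roughly what the MqttSubscriber adapter does
        MqttClient subscriber = new MqttClient(broker, MqttClient.generateClientId());
        subscriber.connect();
        subscriber.subscribe("org.cricketmsf/events/#", (t, msg) ->
                System.out.println("received on " + t + ": " + new String(msg.getPayload())));

        // Publisher side: roughly what the MqttDispatcher does when an event is fired
        MqttClient publisher = new MqttClient(broker, MqttClient.generateClientId());
        publisher.connect();
        MqttMessage message = new MqttMessage("new task payload".getBytes());
        message.setQos(1);
        publisher.publish(topic, message);

        Thread.sleep(2000); // give the message time to arrive
        publisher.disconnect();
        subscriber.disconnect();
    }
}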
What we must do is:
Configure the Dispatcher of the Task API service (running on port 8082) to send an MQTT message when a new task is created.
Add the MqttSubscriber adapter to the Admin API service (running on port 8081) to listen for MQTT messages related to tasks.
Add an event handler method to the Admin API service that sends a message to a Slack channel.
The relevant configuration fragments (first for the Task API service, then for the Admin API service) and the event handler source code are shown below.
{
  "@type": "org.cricketmsf.config.ConfigSet",
  "description": "This is sample configuration",
  "services": [
    {
      "@type": "org.cricketmsf.config.Configuration",
      "id": "Microsite",
      "service": "org.cricketmsf.services.Microsite",
      "properties": {
        "host": "0.0.0.0",
        "port": "8082",
        "serviceurl": "http://localhost:9090",
        "servicename": "TaskAPI",
        / ... CUT ... /
      },
      "adapters": {
        "Dispatcher": {
          "name": "Dispatcher",
          "interfaceName": "DispatcherIface",
          "classFullName": "org.cricketmsf.out.MqttDispatcher",
          "properties": {
            "url": "tcp://test.mosquitto.org:1883",
            "qos": "1",
            "root-topic": "org.cricketmsf/events/",
            "event-types": "TASK/CREATED",
            "debug": "true"
          }
        }
        / ... CUT ... /
      }
    }
  ]
}
{
  "@type": "org.cricketmsf.config.ConfigSet",
  "description": "This is sample configuration",
  "services": [
    {
      "@type": "org.cricketmsf.config.Configuration",
      "id": "Microsite",
      "service": "org.cricketmsf.services.Microsite",
      "properties": {
        "host": "0.0.0.0",
        "port": "8081",
        "serviceurl": "http://localhost:9090",
        "servicename": "AdminAPI",
        / ... CUT ... /
      },
      "adapters": {
        "MqttSubscriber": {
          "name": "MqttSubscriber",
          "interfaceName": "Adapter",
          "classFullName": "org.cricketmsf.in.mqtt.MqttSubscriber",
          "properties": {
            "url": "tcp://test.mosquitto.org:1883",
            "qos": "1",
            "root-topic": "org.cricketmsf/events/",
            "topic-filter": "#",
            "type-suffix": "",
            "debug": "true"
          }
        },
        "Notifier": {
          "name": "Notifier",
          "interfaceName": "NotifierIface",
          "classFullName": "my.example.SlackNotifier",
          "properties": {
            "url": "https://hooks.slack.com/services/YOUR_SLACK_WEBHOOK/HERE",
            "ignore-certificate-check": "true"
          }
        }
        / ... CUT ... /
      }
    }
  ]
}
// Microsite.java (Admin API service)
// Invoked by the event Dispatcher for events of category TASK
@EventHook(eventCategory = "TASK")
public void handleTicketingEvent(Event event) {
    if (event.getType().equalsIgnoreCase("CREATED")) {
        notifier.send("" + event.getPayload());
    }
}
Based on this configuration, when a new task is created, the Task API service sends an MQTT message with the event payload to the org.cricketmsf/events/TASK/CREATED topic.
The MqttSubscriber adapter of the Admin API service receives the MQTT message and creates a local event whose category and type are determined by the message topic. The event is then handled by handleTicketingEvent, which sends the message to the Slack channel.
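The SlackNotifier referenced in the configuration above is our own class (my.example.SlackNotifier). A simplified, framework-agnostic sketch of what its send method could do is shown below: it posts a JSON payload to a Slack incoming webhook using the standard java.net.http client. The class and method shown here are illustrative only; the real adapter would also have to implement Cricket's NotifierIface contract, which is not shown.
// SlackNotifierSketch.java (simplified sketch of what my.example.SlackNotifier could do)
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SlackNotifierSketch {

    private final String webhookUrl;

    public SlackNotifierSketch(String webhookUrl) {
        this.webhookUrl = webhookUrl;
    }

    // Sends the given text to the Slack channel configured for the webhook.
    public int send(String text) throws Exception {
        // Naive JSON escaping, sufficient for a sketch
        String json = "{\"text\":\"" + text.replace("\"", "\\\"") + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(webhookUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.statusCode(); // Slack returns HTTP 200 with body "ok" on success
    }
}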
This is how our system looks after all the changes have been made. It differs a bit from the design presented in the first article, which results from the desire to reduce the complexity of the solution and make it easier to discuss.
Summary
We have finished our short journey from monolith to microservices. I hope you enjoyed the trip.
The microservice approach assumes fast delivery of small, easy-to-maintain services. Therefore, the tools used to build them should also be easy to learn and use. Java can successfully meet these criteria as long as we use it properly.
I would like to encourage you to try Cricket, which is one of the lightest microservices frameworks and can also be a good tool for prototyping backend and frontend systems.