
Microservices With Netflix OSS, Apache Kafka, and Docker: Part 2, Microservices With Apache Kafka and Netflix OSS


This microservice building tutorial continues by walking through Eureka service discovery, the config server and user service, and configuring the email service.


This is Part 2 of the blog series on building microservices with Netflix OSS and Apache Kafka. The first part on installing Apache Kafka in a Docker container is published here.

I have also previously published a blog on Docker, so if you would like an introduction to Docker first, you can read it here.

In this blog, we will create a microservice that sends an email after a user registers successfully. This is a deliberately simple implementation meant to explain the Netflix OSS stack and the concept of building a microservice; the focus is therefore not on building a full registration module.

You will need a bit of background on Netflix OSS to understand its different components, so I would suggest reading some material before starting hands-on with this blog. If you would like me to publish an article on Netflix OSS, let me know in the comments; I will be happy to do so.

Solution & Approach

  1. Service Registry (Eureka) - We will use Eureka for service registration and discovery of all services.
  2. Config Server - This is a server where all the configuration will be stored. All services will read configuration from this server. We are going to use a GitHub repository for this purpose.
  3. Zuul - Zuul is the gateway service or load balancer which is used to proxy web requests to the backend services. It provides dynamic routing, monitoring, resiliency, and security.
  4. User Service - This service will register a new user and will send the message to the Kafka broker.
  5. Email Service - This service will consume the message from the Kafka broker and send the Registration Success email notification to the user.

Prerequisites for building the microservices project:

  • Maven
  • Spring tool suite, or any other IDE for Spring Boot development

Service Registry (Eureka)

One of the first things needed is a central service registry which allows service discovery. Implementing a Eureka server for use as a service registry is quite easy and involves the following steps:

Add spring-cloud-starter-eureka-server to the dependencies in pom.xml:

  <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-starter-eureka-server</artifactId>
  </dependency>


Enable the Eureka Server in a main *Application file and annotate it with @EnableEurekaServer:

@SpringBootApplication
@EnableEurekaServer
public class EurekaServerApplication {
       public static void main(String[] args) {
              SpringApplication.run(EurekaServerApplication.class, args);
       }
}


Configure the properties, such as the server port and application name. In bootstrap.yml, define them as below:

server:
  port: 8761
spring:
  application:
    name: discovery-service


In application.yml, define the standard configuration for the dev env as below:

eureka:
  client:
    register-with-eureka: false
    fetch-registry: false


Note: Setting both register-with-eureka and fetch-registry to false tells the server not to register itself with the registry or fetch it, since we are running a standalone instance with no replica nodes. Don't forget to set them back to true in a production environment.

Build and execute the project from cmd prompt:

  • mvn clean install
  • Go to the target folder
  • Run java -jar EurekaServer-0.0.1-SNAPSHOT.jar

Test Eureka by opening http://localhost:8761 in a browser; you should see the Eureka dashboard.
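Besides the browser dashboard, the Eureka server also exposes a REST API that you can query from the command line (the response is XML by default; pass an Accept header for JSON):

```shell
# List all applications currently registered with this Eureka server
curl -H "Accept: application/json" http://localhost:8761/eureka/apps
```

At this point the list will be empty; the config server, user service, and email service built below will appear here as they come up.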


Config Server

As is almost always the case, we will have to support the application in multiple environments, and therefore we need a config server to host the configuration files. In this case, we are going to use a Git repository to store the configuration files for each service. When a service starts, it requests its configuration from this server.

We will create a project in the Git repository with the name EmailServiceConfigProperties. The structure of this project would be like this:

EmailServiceConfigProperties
      email-service
            default
                  email-service.yml
            dev
                  email-service.yml
            prod
                  email-service.yml
      user-service
            default
                  user-service.yml
            dev
                  user-service.yml
            prod
                  user-service.yml


This project is available in the Git repository; do note that you will need to change the properties for the Kafka broker and the Gmail SMTP server.

The config server will be built as a discovery service (Eureka) client. So when a service is started, it will take the config server location from the discovery service.

Add the dependencies for the config server and discovery client to the pom.xml, as given below:

   <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-config-server</artifactId>
   </dependency>
   <dependency>
         <groupId>org.springframework.cloud</groupId>
         <artifactId>spring-cloud-starter-eureka</artifactId>
   </dependency>


Enable the config server using the annotation @EnableConfigServer and Eureka discovery using the annotation @EnableEurekaClient in the main *Application file in the project.

@EnableEurekaClient
@EnableConfigServer
@SpringBootApplication
public class ConfigServerApplication {
      public static void main(String[] args) {
             SpringApplication.run(ConfigServerApplication.class, args);
       }
}


Configure the properties, such as the server port and application name. In bootstrap.yml, define them as below:

server:
  port: 8888
spring:
  application:
    name: config-server


Note: By default, the config server runs on port 8888.

In application.yml, define the Git repository path of the project storing configuration files as given below:

spring:
  profiles:
    active: dev
  cloud:
    config:
      server:
        git:
          uri: https://github.com/srast3/spring-kafka-ms.git/
          search-paths:
              - "EmailServiceConfigProperties/{application}/{profile}"


Build and execute the project from cmd prompt:

  • mvn clean install
  • Go to the target folder
  • Run java -jar ConfigServer-0.0.1-SNAPSHOT.jar

Test the config server by opening http://localhost:8888/email-service/dev; you will see that it returns the configuration properties from the dev folder in the Git repository.
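You can run the same check with curl; Spring Cloud Config's environment endpoint returns JSON of roughly the shape sketched below (the exact propertySources content depends on your Git repository):

```shell
# Fetch the dev-profile configuration for email-service from the config server
curl http://localhost:8888/email-service/dev
# Response shape (abridged):
# {
#   "name": "email-service",
#   "profiles": ["dev"],
#   "propertySources": [
#     { "name": ".../dev/email-service.yml", "source": { "...": "..." } }
#   ]
# }
```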

Since the config server registers itself with the Eureka service registry, it will also show up in the Eureka console:


User Service

This service registers the user and produces a message to the Kafka broker. The source code for the user service is available on GitHub. This service will:

  1. Register itself with the service registry (Eureka).
  2. Take its configuration from the config server.
  3. Send a message to the Kafka broker on every registration.
  4. Store the registered user in an H2 database.

Let's create the Spring Boot project for the user service with the name UserAccount, and add the following dependencies to pom.xml:
<dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
<dependency>
     <groupId>org.springframework.cloud</groupId>
     <artifactId>spring-cloud-starter-eureka</artifactId>
</dependency>
<dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
     <groupId>org.springframework.kafka</groupId>
     <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
     <groupId>com.h2database</groupId>
     <artifactId>h2</artifactId>
     <scope>runtime</scope>
</dependency>


Let's enable service discovery by adding the annotation @EnableEurekaClient.

@EnableEurekaClient
@SpringBootApplication
public class UserAccountApplication {
     public static void main(String[] args) {
            SpringApplication.run(UserAccountApplication.class, args);
     }
}


In bootstrap.yml, we will enable cloud config discovery and add the port and application name.

server:
  port: 8081
spring:
  application:
    name: user-service
  profiles:
    active: dev
  cloud:
    config:
      discovery:
        enabled: true
        service-id: config-server


In the configuration file user-service.yml, define the Kafka broker and H2 database properties.

spring:
  h2:
    console:
      enabled: true
      path: /h2-console
  datasource:
    url: jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
    username: sa
    password:
  kafka:
    bootstrap-servers: 192.168.99.100:9092
    topic:
      userRegistered: USER_REGISTERED_TOPIC
security:
  basic:
    enabled: false


Now let's discuss the project structure. In the user service project, we need to define an entity which will represent the data structure and will be used to transfer the data.

/User.java

@Entity
@Table(name = "\"User\"")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @NotNull
    private String userName;
    @NotNull
    private String password;

    // getters and setters omitted
}

We will use Spring data to handle the CRUD operations on the user entity, so we will define a UserRepository.

/UserRepository.java

public interface UserRepository extends CrudRepository<User, Long> {
}


Then there will be a service class which does the actual work of either registering a user or fetching all users, depending on the call.

public interface UserService {
    User registerUser(User input);
    Iterable<User> findAll();
}


Service implementation of this interface will have the logic to send a message to Kafka and save the new user in the database.
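The implementation class itself is in the GitHub repository; a minimal sketch might look like the following. The topic property name matches the user-service.yml configuration shown above, but treat the exact class and field names as assumptions:

```java
@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserRepository userRepository;

    @Autowired
    private Sender sender; // Kafka sender bean, defined later in this post

    // Topic name comes from user-service.yml in the config repository
    @Value("${spring.kafka.topic.userRegistered}")
    private String userRegisteredTopic;

    @Override
    public User registerUser(User input) {
        // Persist the new user first, then publish the registration event
        User saved = userRepository.save(input);
        sender.send(userRegisteredTopic, saved);
        return saved;
    }

    @Override
    public Iterable<User> findAll() {
        return userRepository.findAll();
    }
}
```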

Next, we will define a REST controller and will expose the rest endpoints.

/UserController

@RestController
@RequestMapping(value = "/", produces = MediaType.APPLICATION_JSON_VALUE)
public class UserController {

    @Autowired
    private UserService userService;

In this class, create a GET /member method to fetch all the registered users:

    @RequestMapping(method = RequestMethod.GET, path = "/member")
    public ResponseEntity<Iterable<User>> getAll() {
        Iterable<User> all = userService.findAll();
        return new ResponseEntity<>(all, HttpStatus.OK);
    }


Create a POST method, /register, to register a new user:

    @RequestMapping(method = RequestMethod.POST, path = "/register")
    public ResponseEntity<User> register(@RequestBody User input) {
        User result = userService.registerUser(input);
        return new ResponseEntity<>(result, HttpStatus.OK);
    }


Now, let's define the sender configuration, which produces the message to the Kafka topic. First, we will define a KafkaTemplate. The KafkaTemplate requires a ProducerFactory, which sets the strategy for producing a Producer instance. The ProducerFactory in turn needs a Map of configuration properties, the most essential of which are BOOTSTRAP_SERVERS_CONFIG, KEY_SERIALIZER_CLASS_CONFIG, and VALUE_SERIALIZER_CLASS_CONFIG.

/SenderConfig.java

@Configuration
public class SenderConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
      Map<String, Object> props = new HashMap<>();
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
      props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
      props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
      return props;
    }

    @Bean
    public ProducerFactory<String, User> producerFactory() {
      return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, User> simpleKafkaTemplate() {
      return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public Sender sender() {
      return new Sender();
    }
}


The message payload is serialized from a User object with the help of JsonSerializer. Finish with the implementation of the Sender bean, which uses the KafkaTemplate configured above.

/Sender.java

public class Sender {
    private static final Logger LOGGER = LoggerFactory.getLogger(Sender.class);
    @Autowired
    private KafkaTemplate<String, User> simpleKafkaTemplate;
    public void send(String topic, User payload) {
      LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
      simpleKafkaTemplate.send(topic, payload);
    }
}


Email Service

Next, we will build the microservice for sending emails upon successful registration. This microservice will listen to the topic to which the user service publishes. For this purpose, we will create a DTO object and configure the Kafka consumer to transform the incoming payload into it.

As a first step, let's create a Spring project and add the required dependencies to pom.xml. Required dependencies are Eureka Discovery, JPA, H2, Kafka, and Config Client.

<dependency> 
    <groupId>org.springframework.cloud</groupId> 
    <artifactId>spring-cloud-starter-config</artifactId> 
</dependency> 
<dependency> 
    <groupId>org.springframework.cloud</groupId> 
    <artifactId>spring-cloud-starter-eureka</artifactId> 
</dependency> 
<dependency> 
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-data-jpa</artifactId> 
</dependency> 
<dependency> 
    <groupId>org.springframework.kafka</groupId> 
    <artifactId>spring-kafka</artifactId> 
</dependency> 
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency> 
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-mail</artifactId> 
</dependency>


We are going to use Gmail SMTP to send the email for this project; the related properties are configured in email-service.yml in the config repository on GitHub.

 mail:
   host: smtp.gmail.com
   port: 587
   username: <userid>
   password: <password>
   properties.mail.smtp:
     auth: true
     starttls.enable: true


Then, we need to configure a ConsumerFactory to be able to consume messages from the broker.

/ReceiverConfig.java

@Configuration
@EnableKafka
public class ReceiverConfig {
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "UserRegisteredConsumer");
        return props;
    }
    @Bean
    public ConsumerFactory<String, UserDTO> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
        new JsonDeserializer<>(UserDTO.class));
    }
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, UserDTO> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, UserDTO> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
    @Bean
    public Receiver receiver() {
         return new Receiver();
    }
}


Now, let's create the Kafka listener, which will be invoked when a message is received.

/Receiver.java

public class Receiver {
    private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);
    private CountDownLatch latch = new CountDownLatch(1);
    @Autowired
    private EmailService emailService;
    @KafkaListener(topics = "${spring.kafka.topic.userRegistered}")
    public void receive(UserDTO payload) {
        LOGGER.info("received payload='{}'", payload);
        emailService.sendSimpleMessage(payload);
        latch.countDown();
    }
}


As we did earlier for the user service, we will define a service implementation class which takes the incoming request, transforms it, and sends the email.

/EmailServiceImpl

@Component
public class EmailServiceImpl implements EmailService {
    @Autowired
    public JavaMailSender emailSender;
    @Autowired
    public MailRepository mailRepository;
    @Override
    public void sendSimpleMessage(UserDTO input) {
       try {
          Mail newMail = new Mail();
          newMail.setTo(input.getUserName());
          newMail.setSubject("Registration Success");
          newMail.setText("Welcome, You have successfully registered!");

          SimpleMailMessage message = new SimpleMailMessage();
          message.setTo(newMail.getTo());
          message.setSubject(newMail.getSubject());
          message.setText(newMail.getText());

          mailRepository.save(newMail);
          emailSender.send(message);
       } catch (MailException exception) {
          exception.printStackTrace();
       }
    }
}
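The implementation above references several types that the article does not show: the EmailService interface, the UserDTO deserialized from Kafka, and the Mail entity with its MailRepository. They are all in the GitHub repository; the following are minimal sketches consistent with the code above (each type lives in its own file, and any name beyond those used in the snippet is an assumption):

```java
// Contract implemented by EmailServiceImpl
public interface EmailService {
    void sendSimpleMessage(UserDTO input);
}

// DTO deserialized from the Kafka message; assumed to mirror the User entity's fields
public class UserDTO {
    private String userName;
    private String password;

    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}

// JPA entity recording each mail that was sent
@Entity
public class Mail {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    @Column(name = "mail_to") // TO is a reserved word in SQL, so the column is renamed
    private String to;
    private String subject;
    private String text;
    // getters and setters omitted
}

public interface MailRepository extends CrudRepository<Mail, Long> {
}
```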


To test that everything is fine, build and run the project:

  • Verify Service registry (Eureka) is running (http://localhost:8761)
  • Verify that the config server (Spring Cloud Config) is running and user-service and email-service configuration are available (http://localhost:8888/user-service/default)
  • Build the project: mvn clean install
  • Run java -jar EmailService-0.0.1-SNAPSHOT.jar

We also need to start the Docker container for Zookeeper and Kafka. There is a slight change in the command we used to run the Docker container for Kafka in the previous blog:

docker run -d --name kafka --network kafka-net --publish 9092:9092 --publish 7203:7203 --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 --env ZOOKEEPER_IP=zookeeper ches/kafka


The command in the previous blog was correct for that setup, but here we need to set KAFKA_ADVERTISED_HOST_NAME to the Docker machine IP; otherwise, the Kafka broker is not visible and our microservices cannot connect to it.

Once each microservice is set up and started correctly, you can test the complete flow by creating a new user with POST http://localhost:8081/register.
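With curl, the registration request would look like the following. The field names come from the User entity shown earlier, and note that userName doubles as the address the welcome email is sent to, since EmailServiceImpl passes it to setTo():

```shell
# Register a new user (the userName is used as the recipient email address)
curl -X POST http://localhost:8081/register \
     -H "Content-Type: application/json" \
     -d '{"userName": "someone@example.com", "password": "secret"}'

# List all registered users
curl http://localhost:8081/member
```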

Verify that the new user is created.


You can also verify the user by calling GET http://localhost:8081/member.

Verify that the registration success email was received at your email address.

So this is it! We have successfully created our first microservice project with Netflix OSS, Apache Kafka, and Spring Boot. The complete code for this microservice implementation is available on GitHub.

In the rest of the blog series, I will publish articles on the following topics, so stay tuned:

  1. Zuul, which acts as a gateway or load balancer.
  2. Deploying microservices in Docker containers.
  3. Deploying microservices in AWS.

Please feel free to comment with your opinion about this article and any other suggestions. Also if you like this article, don’t forget to like and share this article with your friends. Thanks!

Happy Coding!



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
