
MiBand 3 and React-Native (Part 3): Docker, Spring Cloud, and Spring Boot


In this article, see how a React-Native application can collect MiBand data and transfer it to a real server.

· Cloud Zone ·

After significant work on my mistakes described in the second chapter of this series, I decided to move on to the final chapter. In this article, we will focus on my latest findings in the development of the server-side. I am going to show how a React-native application can collect MiBand data and transfer it to a real server.

My server will be based on a microservice solution that can be deployed easily with docker-compose. Over the last five to eight years, microservices have become a trending approach to many server-side problems. Their significant scaling capabilities and efficient, low-latency request processing motivated me to implement a small server-side API based on Spring Cloud.

First of all, let's check out the tasks that will be covered:

  1. The React-Native application sends a request to authorize itself on the server and receives a token for further access to the server API
  2. The React-Native application sends the data gathered from MiBand 3 to the server
  3. The server provides the gathered information about a certain user on demand over a secured channel

Our server will work according to the following scheme:

Server scheme

As you can see, the scheme represents a common archetype of microservices based on Spring Cloud. Two general groups were assigned:

  1. Infrastructure services
  2. Business logic related services

You will find more details about them below, but traditionally, I'm going to start from the client side.

Client-Side Update

Talking about server-side development, I cannot forget the client side, since it must prepare certain data and send it to our server via HTTPS. For this PoC, I decided to make a simple request that contains the following information:

  1. The latest HR value collected from MiBand 3
  2. The latest step count collected from MiBand 3
  3. Device meta-information (device name, MAC address, firmware version)
  4. A timestamp of when the request was created

Such features require some changes to our React-native UI. Generally, we can divide our task into a few parts:

  1. Assemble an async record storage to keep records, including tokens and data collected from MiBand and the server
  2. Write a simple REST client to send requests to the server
  3. Modify the UI screens to make the auth and data send/receive scenarios possible

While working on the first task, I made the following code changes:

JavaScript

// constants for common usage among different views
// location: ./src/components/commons/global.jsx
export default {
    // Server Related Info
    SERVER_AUTH_URL_ADDRESS: '192.168.8.118:5000',
    SERVER_ACCOUNT_URL_ADDRESS: '192.168.8.118:6000',
    SERVER_DEVICE_URL_ADDRESS: '192.168.8.118:7000',

    // Async Storage Keys
    ACCESS_TOKEN_KEY: '@AccessToken',
    USERNAME_TOKEN_KEY: '@UserName',
    DEVICE_ID_KEY: '@DeviceId',

    // AUTH PROCESS DATA
    AUTHORIZED_STATE: 'authorized',
    UNAUTHORIZED_STATE: 'unauthorized'
};



To send data to the server, we must know the server's address on the network. You may notice one funny detail: globals.jsx holds static IPs and ports. In the real world, such a situation is rare. The most common solution is a domain name that is translated into a public IP address; more than likely, the translated IP will point to a gateway service, as described below.

Private IP addresses are more than enough for development purposes to verify the basic workflow. Just remember to update them whenever the server's IP changes.

Link with MiBand

Tabs became the new components this time; "react-native-material-bottom-navigation" does the trick. The idea to add tabs emerged almost at once: we must somehow initiate a sign-up procedure and obtain a token, and finally, the records gathered from a device must be sent. No doubt, the current UI looks a little rough, but it's enough to show how React-Native interacts with a server.

The "Band" tab shows our classic data-collecting screen, familiar from the first part of the series, where we could find a MiBand 3 device, pair with it, and get some data in real time. Nothing has changed significantly since then.

According to the official docs, AsyncStorage is an unencrypted, asynchronous, persistent, key-value storage system that is global to the app. A typical use case for it is as follows:

  1. Declare import {AsyncStorage} from 'react-native' in the source file where it's going to be used.
  2. Use "keys" to get read/write access to a certain element inside the storage.
  3. Create global constants that declare the "keys" of that map, so any part of the client-side application can access our data.

This approach is used in our app in the following pages: band, account, data share. General instructions to begin using storage are listed below:

JavaScript

import globals from "../../common/globals.jsx";
import {AsyncStorage} from 'react-native';
//...
try {
    const accessToken = await AsyncStorage.getItem(globals.ACCESS_TOKEN_KEY);
    console.log('AccessToken: ' + accessToken)
    if (accessToken !== null) {
        this.setState({status: globals.AUTHORIZED_STATE})
    } else {
        this.setState({status: globals.UNAUTHORIZED_STATE})
    }
} catch (error) { console.log(error) }
//...



However, the code structure has been changed. Now, we use AsyncStorage to keep a paired deviceId:

JavaScript

// Path: ./src/components/tab/band_connector/bandConnector.jsx
searchBluetoothDevices = () => {
    this.setState({ isConnectedWithMiBand: true })
    NativeModules.DeviceConnector.enableBTAndDiscover((error, deviceBondLevel) => {
        this.setState({ deviceBondLevel: deviceBondLevel })
        AsyncStorage.setItem(globals.DEVICE_ID_KEY, (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1));
    })
    this.setState({ bluetoothSearchInterval: setInterval(this.getDeviceInfo, 5000) })
}
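For clarity: the inline expression stored under DEVICE_ID_KEY generates a random four-digit hex string (it maps Math.random() into the range 0x10000-0x1FFFF and drops the leading 1). A small Java rendering of the same trick, for illustration only:

```java
public class DeviceIdSketch {

    // Same idea as the JS one-liner:
    // (((1 + Math.random()) * 0x10000) | 0).toString(16).substring(1)
    static String randomHex4(double random) { // random in [0, 1)
        int n = (int) ((1 + random) * 0x10000);     // 0x10000 .. 0x1FFFF
        return Integer.toHexString(n).substring(1); // drop the leading '1' -> 4 hex chars
    }

    public static void main(String[] args) {
        System.out.println(randomHex4(Math.random())); // e.g. "a3f1"
        System.out.println(randomHex4(0.0));           // 0000
    }
}
```

Adding 1 before scaling guarantees the intermediate number always has five hex digits, so the substring is always exactly four characters, padding included.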



The deviceId will be transferred to the server later. The data screen was moved to a dedicated component with the same name:

JavaScript

// Path: ./src/components/common/dataScreen/dataScreen.jsx
// ...
export default class DataScreen extends React.Component {

    render() {
        return (
            <View>
                <View>
                    <Text>Heart Beat:</Text>
                    <Text>{this.props.heartBeatRate + ' Bpm'}</Text>
                </View>

                <View>
                    <Text>Steps:</Text>
                    <Text>{this.props.steps}</Text>
                </View>

                <View>
                    <Text>Battery:</Text>
                    <Text>{this.props.battery + ' %'}</Text>
                </View>

                <View>
                    <Text>Device Bond Level:</Text>
                    <Text>{this.props.deviceBondLevel}</Text>
                </View>
            </View>
        );
    }
}



Nothing special is left here. The component itself is more or less ready to be reused in the "Data Share" tab.

Credentials

The "Account" tab serves one purpose: to pass the authentication process with our server and get a corresponding user token. Once this is done, "unauthorized" becomes "authorized", and our AsyncStorage will contain that token plus the user's registered username. The "Data Share" tab will use that token every time.

Below, you can find my current implementation based on the fetch API. A username and password must be provided. Two .then instructions map our response into JSON format and invoke our getAccessToken function.

JavaScript

// Path: ./src/components/common/rest/AccountRequests.jsx
// ...
signUp = (username, password) => {
    console.log('Account signUp: ' + username + ' ' + password)
    return fetch('http://' + globals.SERVER_ACCOUNT_URL_ADDRESS + '/accounts/', {
        method: 'POST',
        headers: {
            Accept: 'application/json',
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({
            username: username,
            password: password
        })
    })
    .then((response) => response.json())
    .then((responseJson) => {
        console.log('account signUp: ' + responseJson)
        this.getAccessToken(username, password)
    })
    .catch((error) => { console.error(error) });
}
// ...



To perform a signUp operation, I created the special method you can see above; it lives in the rest package. Its algorithm is pretty simple: call the account REST endpoint, and if a proper response has been received, call getAccessToken, which is implemented below:

JavaScript

// Path: ./src/components/common/rest/accountRequests.jsx
// ...
getAccessToken = (username, password) => {
    console.log('Account getAccessToken: ' + username + ' ' + password)
    var details = {
        "scope": "ui",
        "username": username,
        "password": password,
        "grant_type": "client_credentials"
    };

    var formBody = [];
    for (var property in details) {
        var encodedKey = encodeURIComponent(property);
        var encodedValue = encodeURIComponent(details[property]);
        formBody.push(encodedKey + "=" + encodedValue);
    }
    formBody = formBody.join("&");

    return fetch('http://' + globals.SERVER_AUTH_URL_ADDRESS + '/mservicet/oauth/token', {
        method: 'POST',
        headers: {
            Authorization: "Basic YnJvd3Nlcjo=",
            Accept: "*/*",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        body: formBody
    })
    .then((response) => response.json())
    .then((responseJson) => {
        console.log('account getAccessToken: ' + responseJson)
        this.storeData(responseJson.access_token, username)
    })
    .catch((error) => { console.error(error) });
}
// ...



This time, we communicate with our auth service, which can issue a fresh user token in exchange for a username and password. Note that the token endpoint expects an x-www-form-urlencoded body instead of JSON. Finally, responseJson.access_token will contain the desired token, to be used later.
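As a side note, the hard-coded Authorization: "Basic YnJvd3Nlcjo=" header is nothing mysterious: it is just base64 of "browser:", i.e. the "browser" client id with an empty secret. A small, self-contained sketch (plain Java, class and method names are mine) showing how both the header and the form body can be produced:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenRequestSketch {

    // "Basic YnJvd3Nlcjo=" == "Basic " + base64("browser:") — client id, empty secret.
    static String basicAuthHeader(String clientId, String clientSecret) {
        String raw = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
    }

    // The same form-encoding the React-Native client performs before calling /oauth/token.
    static String formEncode(Map<String, String> fields) {
        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (body.length() > 0) body.append('&');
            body.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
                .append('=')
                .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) {
        System.out.println(basicAuthHeader("browser", "")); // Basic YnJvd3Nlcjo=
        Map<String, String> details = new LinkedHashMap<>();
        details.put("scope", "ui");
        details.put("username", "demo");
        details.put("grant_type", "client_credentials");
        System.out.println(formEncode(details));
    }
}
```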

Data from server

The "Data Share" tab was added to make communication with the device service on the server side possible. The data screen was modified a bit to additionally show the server's data mark. On top of that, new action buttons were added to provide:

1) Register Device

2) Send Data

3) Get Server Data

The basic scenario looks like this:

1) Pair with your device

2) Get some data from it

3) Auth with server

4) Share gathered data with the server

5) Get confirmation that data has been persisted there

As proof, I made a small demo video that can be checked out.

Server Side

The GitHub link refers to the current state of the server part of the MiBand connector. Since Spring Cloud was chosen to cover all my server-side needs, the following architecture was assembled:

Data collector architecture

A few words about the diagram. The server side makes intensive use of docker-compose with a dev mode, which points to a certain configuration block that includes ports, DB passwords, and so on. The services can be divided into three main groups:

1) Technical services (Config, Gateway, Eureka, Auth, RabbitMQ) are responsible for creating the environment in which the other services can run and communicate with each other properly.

2) Business-related services (Account, Device) take care of the requests that come from end users through the gateway. They actively communicate with their database instances and the auth service. This happens for security reasons: each service must verify the auth level of the end user who has just made a request. Once it's verified, the service can continue processing the request according to the logic already written inside it.

3) Database instances are dedicated entities in the scheme, each linked to a single service by default. That does not mean all access from the outside world is prohibited. For example, we might want to inspect the stored data in dev mode; this remains possible because dev mode exposes a port for each of them. In prod mode, such access would most likely be restricted completely.

Now that we have a more or less clear picture of what is happening in the diagram, our next step is the project setup, including establishing the docker-compose environment.

Environment Setup

First of all, some additional tools must be installed. Below you can find what I was using during my work on the project:

- OS: Ubuntu 19

- IDE: Intellij Idea 19

- IDE plugin: Docker plugin

- Java: openJDK 11

- Maven: v3

- Docker: v19.03.5

It is strongly recommended to have around 5-10 GB of free space for the images and containers produced by the Docker machine. Also, keep in mind that memory is another valuable resource here: the project's microservices will use it frequently and in large amounts.

Project Setup

Let's assume our dev machine has been prepared: Docker, Java, Maven, and the IDE are installed, so we can create our project. Almost all services in our "zoo" will be Java applications; the other part relates to the DB setup, whose scripts are stored in a separate folder. Our objective here is to prepare the Spring Boot projects and configure each one to use the corresponding configuration that defines its purpose. The affected services are:

1) config service

2) registry service

3) gateway-point service

4) auth-service

5) account-service

The final project view will look like this:

Currently, the project has only one place to keep and configure environment variables: the .env file. Docker-compose always reads environment variables from there. Those vars include important data such as ports, passwords, additional options to run services with debug mode enabled, and so on.

Travis CI helps verify the latest changes made to the git repo by running unit tests. For now, it compiles the sources, runs the tests, and sends status reports to the codecov service. In the end, Travis and codecov report the latest build status and the current test coverage of all modules with Java sources inside.

The microservices run on Java 11. This time, the OpenJDK edition was chosen to avoid any possible license restrictions. Since Java 11 ships a new garbage collector, I was curious how the new approach to object cleanup works in practice, and whether it brings additional performance or not. Honestly, this topic deserves a separate article; I hope to reach that point someday and write up my findings.

Please notice that each module, except mongodb, has its own pom.xml file. Since services can have their own additional libraries, I decided to keep them separated. You may also notice that a Dockerfile exists in every module folder; docker-compose will use them directly when it's time to run the server. The main pom.xml file is in the project root; for now, we can focus on its modules tag:

XML

<!-- ... -->
<modules>
    <module>config</module>
    <module>registry</module>
    <module>gateway-point</module>
    <module>auth-service</module>
    <module>account-service</module>
    <module>device-service</module>
</modules>
<!-- ... -->



If, for some reason, one more service must be added, you just need to add one more record to that list.

Config service setup

The config service is a horizontally scalable, centralized configuration service for distributed systems.

Its main purpose is to keep and share configurations among all the server's services during startup. Configs are shared via the native profile (which can be changed at any moment). Config sharing happens once all containers are set up and have begun their init procedure.

Each service in the system must keep a bootstrap.yml file in its sources. It contains the config service's network address, including the port. As an example, let's take the bootstrap file from the account service:

YAML

spring:
  main:
    allow-bean-definition-overriding: true
  application:
    name: account-service
  cloud:
    config:
      uri: http://config:${CONFIG_SERVICE_DEV_PORT}
      fail-fast: true
      password: ${CONFIG_SERVICE_PASSWORD}
      username: user
      profile: native
      retry:
        max-attempts: 10
  profiles:
    active: native



The native profile is used instead of the default one. It is also set in the main config file of the configuration service:

YAML

spring:
  cloud:
    config:
      server:
        native:
          search-locations: classpath:/shared
  profiles:
    active: native
  security:
    user:
      password: ${CONFIG_SERVICE_PASSWORD}

server:
  port: ${CONFIG_SERVICE_DEV_PORT}



The search location points to the shared folder, where the config service keeps the major configuration for all the services that need one. For instance, the RabbitMQ and MongoDB instances do not require any communication with the config service at all.
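To make the ${...} values in the YAML above less magical: docker-compose injects the variables from the .env file into each container, and the placeholders are then substituted into the configuration. The following is a simplified stand-in for that substitution step (plain Java, illustration only, not Spring's actual resolver):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderSketch {

    // Matches ${CONFIG_SERVICE_DEV_PORT}-style environment placeholders.
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([A-Z0-9_]+)}");

    // Replace each known placeholder with its env value; leave unknown ones intact.
    static String resolve(String value, Map<String, String> env) {
        Matcher m = PLACEHOLDER.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String replacement = env.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("CONFIG_SERVICE_DEV_PORT", "8888");
        System.out.println(resolve("http://config:${CONFIG_SERVICE_DEV_PORT}", env)); // http://config:8888
    }
}
```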

The Spring Boot application class has just one notable annotation:

Java

package com.spayker.config;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class ConfigApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigApplication.class, args);
    }
}



@EnableConfigServer gives the config role to our current module. One more class must also be declared here:

Java

package com.spayker.config;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable();
        http
            .authorizeRequests()
                .antMatchers("/actuator/**").permitAll()
                .anyRequest().authenticated()
            .and()
                .httpBasic();
    }
}



The security config allows requests to the "/actuator/**" endpoints without any auth, while the other endpoints are restricted; only internal services can access them, since they all carry auth data in their bootstrap files.

Registry service setup

The Netflix Eureka service implements the "service discovery" architecture pattern. It automatically detects service instances, which may have dynamically assigned addresses due to auto-scaling, failures, and upgrades.

Once the server has started, Eureka registers services and provides meta-data including host, port, health-indicator URL, home page, etc. Eureka receives heartbeat messages from each instance belonging to a service. If the heartbeats fail for longer than a configurable period, the instance is removed from the registry.
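The heartbeat-based eviction described above can be modeled in a few lines. This is a deliberately tiny illustration of the pattern, not Eureka's actual implementation (which adds leases, self-preservation mode, and replication):

```java
import java.util.HashMap;
import java.util.Map;

public class RegistrySketch {

    private final Map<String, Long> lastHeartbeat = new HashMap<>(); // instanceId -> last beat (ms)
    private final long evictionThresholdMs;

    RegistrySketch(long evictionThresholdMs) { this.evictionThresholdMs = evictionThresholdMs; }

    void register(String instanceId, long nowMs)  { lastHeartbeat.put(instanceId, nowMs); }
    void heartbeat(String instanceId, long nowMs) { lastHeartbeat.replace(instanceId, nowMs); }

    // Drop every instance whose last heartbeat is older than the threshold.
    void evictStale(long nowMs) {
        lastHeartbeat.values().removeIf(last -> nowMs - last > evictionThresholdMs);
    }

    boolean isRegistered(String instanceId) { return lastHeartbeat.containsKey(instanceId); }

    public static void main(String[] args) {
        RegistrySketch registry = new RegistrySketch(30_000);
        registry.register("account-service-1", 0);
        registry.register("device-service-1", 0);
        registry.heartbeat("account-service-1", 25_000); // keeps its lease alive
        registry.evictStale(40_000);                     // device-service-1 missed its beats
        System.out.println(registry.isRegistered("account-service-1")); // true
        System.out.println(registry.isRegistered("device-service-1"));  // false
    }
}
```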

Eureka also provides a simple interface for tracking the running services and the number of available instances: http://localhost:8761

To make a service act as the Eureka server, its main Spring Boot class must contain the @EnableEurekaServer annotation, and its bootstrap.yml must have a couple of additional records:

YAML

eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false



With these flags, Eureka won't try to register itself or fetch a registry: the registry service is the server, not a client.

Gateway-point service setup

The gateway service processes all incoming requests in the system and passes them through pre-defined routes. Netflix Zuul was chosen to play the role of the gateway point. Basically, three main routes will be set:

1) route to auth service

2) route to account service

3) route to device service

Following that route description, the config file looks like this:

YAML

zuul:
  ignoredServices: '*'
  host:
    connect-timeout-millis: 320000
    socket-timeout-millis: 320000

  routes:
    auth-service:
      path: /mservicet/**
      url: http://auth-service:${AUTH_SERVICE_DEV_PORT}
      stripPrefix: false
      sensitiveHeaders:

    account-service:
      path: /accounts/**
      serviceId: account-service
      stripPrefix: false
      sensitiveHeaders:

    device-service:
      path: /devices/**
      serviceId: device-service
      stripPrefix: false
      sensitiveHeaders:



So, the API gateway is a single entry point for all the clients we may have. @EnableZuulProxy tells Spring Boot that our gateway module uses the Zuul implementation of a gateway point.
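To make the routing rules concrete, here is a toy model of how the gateway's ordered, prefix-based route table resolves a request path to a service. It illustrates the pattern only; it is not Zuul code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RouteTableSketch {

    // Insertion-ordered: the first matching route wins, like an ordered route table.
    private final Map<String, String> routes = new LinkedHashMap<>();

    void addRoute(String pattern, String service) {
        // keep only the literal prefix of a "/prefix/**" pattern
        routes.put(pattern.replace("/**", ""), service);
    }

    String resolve(String requestPath) {
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (requestPath.startsWith(e.getKey() + "/") || requestPath.equals(e.getKey())) {
                return e.getValue();
            }
        }
        return null; // like "ignoredServices: '*'" — unknown paths are not forwarded
    }

    public static void main(String[] args) {
        RouteTableSketch gateway = new RouteTableSketch();
        gateway.addRoute("/mservicet/**", "auth-service");
        gateway.addRoute("/accounts/**", "account-service");
        gateway.addRoute("/devices/**", "device-service");
        System.out.println(gateway.resolve("/accounts/demo"));         // account-service
        System.out.println(gateway.resolve("/mservicet/oauth/token")); // auth-service
        System.out.println(gateway.resolve("/unknown/path"));          // null
    }
}
```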

Auth service setup

The authorization service is responsible for determining the rights an end user has and applying them to grant access to the server API. The provided security level is based on OAuth2, which focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.

Its main configuration lives in the OAuth2AuthorizationConfig class:

Java

@Configuration
@EnableAuthorizationServer
public class OAuth2AuthorizationConfig extends AuthorizationServerConfigurerAdapter {

    private TokenStore tokenStore = new InMemoryTokenStore();

    @Autowired
    @Qualifier("authenticationManagerBean")
    private AuthenticationManager authenticationManager;

    @Autowired
    private MongoUserDetailsService userDetailsService;

    @Autowired
    private Environment env;

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
                .withClient("browser")
                .authorizedGrantTypes("refresh_token", "password", "client_credentials")
                .scopes("ui")
                .and()
                .withClient("account-service")
                .secret(env.getProperty("ACCOUNT_SERVICE_PASSWORD"))
                .authorizedGrantTypes("client_credentials", "refresh_token")
                .scopes("server")
                .and()
                .withClient("device-service")
                .secret(env.getProperty("DEVICE_SERVICE_PASSWORD"))
                .authorizedGrantTypes("client_credentials", "refresh_token")
                .scopes("server");
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        endpoints.tokenStore(tokenStore)
                 .authenticationManager(authenticationManager)
                 .userDetailsService(userDetailsService);
    }

    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer
                .tokenKeyAccess("permitAll()")
                .checkTokenAccess("isAuthenticated()")
                .passwordEncoder(NoOpPasswordEncoder.getInstance());
    }
}



The configure(ClientDetailsServiceConfigurer clients) method contains the main security scopes, grant types, and the list of clients that can communicate with our infrastructure in general. The "browser" client is used to get a token; our client-side REST API uses it the first time it passes the sign-up procedure and obtains a token by username and password. The account and device services use the "server" scope for inter-service communication inside the microservice environment. FeignClient makes that happen; more details on this matter are given below.
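The client/scope layout above can be summarized in a small sketch. The check mirrors what a guard like @PreAuthorize("#oauth2.hasScope('server')") enforces; this is plain Java for illustration, not Spring Security:

```java
import java.util.Map;
import java.util.Set;

public class ClientScopeSketch {

    // Mirrors the in-memory client registrations from OAuth2AuthorizationConfig:
    // "browser" gets the "ui" scope, the internal services get "server".
    private static final Map<String, Set<String>> CLIENT_SCOPES = Map.of(
            "browser", Set.of("ui"),
            "account-service", Set.of("server"),
            "device-service", Set.of("server"));

    static boolean hasScope(String clientId, String requiredScope) {
        return CLIENT_SCOPES.getOrDefault(clientId, Set.of()).contains(requiredScope);
    }

    public static void main(String[] args) {
        System.out.println(hasScope("device-service", "server")); // true: may call internal APIs
        System.out.println(hasScope("browser", "server"));        // false: the UI client stays outside
    }
}
```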

Account Service Setup

The account service contains the server's business logic focused on account management. Besides CRUD operations on the Account entity, it actively talks to the auth service when a sign-up scenario begins for a new user. On the other hand, the device service uses the account REST API to verify the user passed in the JSON body; this happens for new device registration and further data transfers.

The REST controller is provided by Spring and contains a couple of endpoints to create new accounts and get their info by name.

Java

package com.spayker.account.controller;

import com.spayker.account.domain.Account;
import com.spayker.account.domain.User;
import com.spayker.account.service.AccountService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.PathVariable;

import javax.validation.Valid;

@RestController
public class AccountController {

    @Autowired
    private AccountService accountService;

    @PreAuthorize("#oauth2.hasScope('server')")
    @RequestMapping(path = "/{name}", method = RequestMethod.GET)
    public Account getAccountByName(@PathVariable String name) {
        return accountService.findByName(name);
    }

    @RequestMapping(path = "/", method = RequestMethod.POST)
    public Account createNewAccount(@Valid @RequestBody User user) {
        return accountService.create(user);
    }
}



The getAccountByName(...) method is marked with the @PreAuthorize annotation requiring the 'server' scope. This means that some other service will use this API via FeignClient; in our case, the device service will be the consumer of that API.

Device service setup

The device service keeps all the data related to smart watches/bands. The React-Native client can send requests to:

- register a new device by username

- save/update information by a certain smart band (HR, steps, etc)

- get current information in terms of device Id and username

Its REST controller looks pretty similar to what we have already seen in the account service.

Java

package com.spayker.device.controller;

import com.spayker.device.domain.Device;
import com.spayker.device.service.DeviceService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;

import javax.validation.Valid;

@RestController
public class DeviceController {

    @Autowired
    private DeviceService deviceService;

    @RequestMapping(path = "/{deviceId}", method = RequestMethod.GET)
    public Device getDeviceById(@PathVariable String deviceId) {
        return deviceService.findByDeviceId(deviceId);
    }

    @RequestMapping(path = "/", method = RequestMethod.PUT)
    public void updateDeviceData(@Valid @RequestBody Device device) {
        deviceService.saveChanges(device);
    }

    @RequestMapping(path = "/", method = RequestMethod.POST)
    public Device createNewDevice(@Valid @RequestBody Device device) {
        return deviceService.create(device);
    }
}



To start using the account service API, an appropriate interface was created, based on the @FeignClient annotation:

Java

package com.spayker.device.client;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@FeignClient(name = "account-service")
public interface AccountServiceClient {

    @RequestMapping(method = RequestMethod.GET, value = "/accounts/{name}", consumes = MediaType.APPLICATION_JSON_UTF8_VALUE)
    String getAccountByName(@PathVariable("name") String name);

}



Pay attention to the method names: the account REST controller and the Feign interface are kept identical here. Strictly speaking, Feign matches by path and HTTP method rather than by method name, but mirroring the names makes the mapping between consumer and producer obvious.

A few comments about databases

Talking about ways to store data on the server, I'll start from the requirements. For a PoC, I wanted an easy-to-use database that does not require much time for setup, data modeling, and maintenance. A relational approach is not really comfortable here: the data flow is going to change quite frequently, and from time to time significantly.

That is why I turned my attention to document-oriented databases. Such storages do not require up-front data modeling for quick concept testing, and they present data in a JSON format that is useful during REST testing. MongoDB looked quite promising for my needs: it offers horizontal scaling, transaction support, and official images on Docker Hub.

To make MongoDB usage possible, I created a Dockerfile that builds an image for the future containers.

Dockerfile

FROM mongo:3

ADD init.sh /init.sh
ADD ./dump /

RUN \
 chmod +x /init.sh && \
 apt-get update && apt-get dist-upgrade -y && \
 apt-get install psmisc -y -q && \
 apt-get autoremove -y && apt-get clean && \
 rm -rf /var/cache/* && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["/init.sh"]



In addition, an init.sh script was added to generate technical users in the freshly created container and run the dump scripts there.

Shell

#!/bin/bash
if test -z "$MONGODB_PASSWORD"; then
    echo "MONGODB_PASSWORD not defined"
    exit 1
fi

auth="-u user -p $MONGODB_PASSWORD"

# MONGODB USER CREATION
(
echo "setup mongodb auth"
create_user="if (!db.getUser('user')) { db.createUser({ user: 'user', pwd: '$MONGODB_PASSWORD', roles: [ {role:'readWrite', db:'mservicet'} ]}) }"
until mongo mservicet --eval "$create_user" || mongo mservicet $auth --eval "$create_user"; do sleep 5; done
killall mongod
sleep 1
killall -9 mongod
) &

# INIT DUMP EXECUTION
(
if test -n "$INIT_DUMP"; then
    echo "execute dump file"
    until mongo mservicet $auth $INIT_DUMP; do sleep 5; done
fi
) &

echo "start mongodb without auth"
chown -R mongodb /data/db
gosu mongodb mongod "$@"

echo "restarting with auth on"
sleep 5
exec gosu mongodb /usr/local/bin/docker-entrypoint.sh --auth "$@"



The Account and Device services have their own dump scripts. For now, they simply create a collection and fill it with some demo entities.

JavaScript

/**
 * Creates a pre-filled demo account
 */

print('dump start');

db.accounts.update(
    { "_id": "demo" },
    {
        "_id": "demo",
        "lastSeen": new Date(),
        "note": "demo note",
        "data": []
    },
    { upsert: true }
);

print('dump complete');



The Auth service has its own Mongo database to keep the user-related info that may be needed during token generation.
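Wiring these databases together in docker-compose could look like the sketch below. This is not the project's actual compose file: the service names, build path, volume name, and dump file name are assumptions for illustration; only MONGODB_PASSWORD and INIT_DUMP are real, since the init.sh script above reads them.

```yaml
# Hypothetical excerpt from docker-compose.yml: one Mongo container per service,
# all built from the Dockerfile shown above. Names and paths are illustrative.
version: "3"
services:
  account-db:
    build: ./mongo                              # assumed location of the Dockerfile
    environment:
      MONGODB_PASSWORD: ${MONGODB_PASSWORD}     # read by init.sh to create the tech user
      INIT_DUMP: /account.js                    # dump script copied into / by 'ADD ./dump /'
    volumes:
      - account-data:/data/db                   # persist data between container restarts

  auth-db:
    build: ./mongo
    environment:
      MONGODB_PASSWORD: ${MONGODB_PASSWORD}     # auth-db keeps its own isolated database

volumes:
  account-data:
```

Keeping one database container per service preserves the microservice principle that no two services share a datastore.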

Server Launch

We already know from the previous chapters how to run the React-Native client. The server side is built in several steps:

1) Compile the sources from the project root folder by typing in a terminal:

Shell

mvn clean install



2) Run the unit tests (they run automatically before the service artifacts are built).

3) Generate the jar files of the services in their target folders (this happens automatically after the test phase is finished). If everything goes right, you will see a successful build report.

4) Run the docker-compose script by typing:

Shell

sudo docker-compose -f ./docker-compose.yml -f ./docker-compose.dev.yml up



Coming back to the IDE chosen for the project, I prepared a couple of tweaks that simply make working with microservices easier.

First, you can enable the Docker plugin (Settings -> Plugins -> Docker) and get almost full control over the images and containers deployed on the Docker machine.

With this plugin, development, debugging, and maintenance of microservices become significantly easier thanks to a well-developed UI that shows the current logs of each running service, its opened ports, declared system variables, and so on. In addition, I have prepared IDE run configurations in the 'resources/idea/run/docker' folder. You can import them into your IDEA and begin running containers immediately.

Debugging Services

Debugging microservices can become a real challenge. During the active stage of development, restarting the whole microservice world again and again is time consuming, especially when your changes to a certain service are minor. For such cases, I added debug ports for dev mode. They allow you to establish a debug connection with a target service and check what is going on there on the fly. Besides, this setup supports the "hot deploy" feature by default. All you need to do is:

1) Import the run configurations from the "resources/idea/run/remote" folder.

2) Run the config that corresponds to your target service.

3) Start modifying your sources.

4) Recompile the modified source file (Ctrl + Shift + F9).

5) Wait until the class(es) are compiled and loaded on the service side.

6) Verify the uploaded changes via activated breakpoints.
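A debug port like the ones described above is typically opened with the JVM's standard JDWP agent. The sketch below shows how this could be wired in docker-compose.dev.yml; the service name, port number, and the assumption that the service's start script honors a JAVA_OPTS variable are all illustrative, while the -agentlib:jdwp option itself is standard JVM behavior.

```yaml
# Hypothetical excerpt from docker-compose.dev.yml: expose a JDWP debug port
# for one service so the IDE's "remote" run configuration can attach to it.
services:
  account-service:
    environment:
      # Standard JVM remote-debug agent. suspend=n lets the service start
      # without waiting for a debugger; 'address=*:5005' is Java 9+ syntax
      # (on Java 8 use 'address=5005').
      JAVA_OPTS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
    ports:
      - "5005:5005"   # map the debug port to the host for the IDE
```

An IDEA "Remote" run configuration pointing at localhost:5005 then attaches to the running container, which is what makes recompiled classes hot-swap into the live service.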

Conclusion

The server-side solution was designed as a PoC from the start. A sharp eye will find minor things there to be solved or improved. However, it satisfies the major demands set at the beginning:

1) Receive an auth request and handle it.

2) Send back a user token that grants access to the main REST endpoints on the server.

3) Handle and persist data coming from the mobile application.

4) Send back the collected data on demand.

Taking this into consideration, I will continue polishing and extending its capabilities. The next part will finally cover questions regarding iOS and MiBand 4 support. Do not hesitate to ask me about the project structure and its sources. Chat rooms were created on the Gitter platform; you can find the links on the GitHub pages of my projects. Take care! :)

Links

Miband 3 connector on react-native

Server part for connector

