Cloud Deployment Process for Internet of Vehicles: IoV Series (III)
This third part of the IoV series focuses on how to deploy and configure your application in the cloud.
We have discussed the application architecture and service selection on the cloud. This chapter summarizes the overall process of cloud migration. An actual migration involves far more detail than we can cover here, so we omit some specifics to focus on the main procedures. The following figure shows the general migration process:
The migration to Alibaba Cloud involves the following stages: database configuration and data migration, deployment of basic services such as the Dubbo service, MQ service, and storage service, application deployment and configuration, functional testing and integration testing, and flow cutting and security reinforcement.
Database Preparation and Configuration
As discussed in our previous articles, you have to migrate your MySQL database to an Alibaba Cloud ApsaraDB for RDS instance. If particularly high performance is required (for example, when even a single highly configured physical server cannot meet the performance requirements), migrate your database to an Alibaba Cloud DRDS instance instead.
For security and stability, add the IP addresses or CIDR blocks that need to access the database to the whitelist of the target RDS instance before you use it. We recommend that you maintain the whitelist periodically, because correct use of the whitelist improves access security for your RDS instance.
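As a quick sanity check before cutover, you can verify that a client address actually falls inside one of the configured whitelist entries. The following is a minimal, illustrative sketch; the whitelist entries and client addresses are hypothetical examples, not values from the article:

```python
import ipaddress

# Hypothetical whitelist as configured on the RDS instance.
WHITELIST = ["10.0.0.0/24", "192.168.1.10/32"]

def is_whitelisted(client_ip: str) -> bool:
    """Return True if client_ip matches any whitelist CIDR entry."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in WHITELIST)

print(is_whitelisted("10.0.0.42"))    # True: inside 10.0.0.0/24
print(is_whitelisted("203.0.113.9"))  # False: not whitelisted
```

Running such a check against each application host's egress address helps catch whitelist gaps before they surface as connection errors in production.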
After you create the database and account, configure the migration task. Use the same method to migrate offline data to the cloud database with Data Transmission Service (DTS).
DTS also supports data migration between Redis instances. If the source instance is a user-created Redis instance, incremental data synchronization is supported, which enables smooth Redis data migration without stopping local application services.
MongoDB clusters are migrated to ApsaraDB for HBase clusters. HBase supports many scenarios and can be selected based on the business model. The choice of specification depends on the service QPS, storage capacity, read and write patterns, latency requirements, and stability requirements. For your ApsaraDB for HBase cluster, you can choose between SSD cloud disks and ultra cloud disks, exclusive or general specifications, and models ranging from 4 CPU 8G to 32 CPU 128G.
By default, the master node does not provide storage and uses primary/backup protection for single-point disaster tolerance. The SSD cloud disk is more stable than the ultra cloud disk and offers better read performance, especially for random reads.
For a large data volume with ordinary latency requirements, select the general 4 CPU 16G model with ultra cloud disks, which allow many disks to be mounted. If you require low response latency, select the exclusive 8 CPU 32G or 16 CPU 64G model with SSD cloud disks. If your QPS requirements are moderate (such as 10,000 to a million QPS), select the 4 CPU 8G or 4 CPU 16G model. If you require excellent read performance, select a 1:4 CPU-to-memory model; a 1:2 ratio is generally sufficient for ordinary reads.
For Elasticsearch, we strongly recommend that you configure monitoring and alarms for the following parameters:
- Cluster status (alert when the status indicator is not green, that is, yellow or red)
- Node disk usage (%) (recommended alarm threshold: 75%; usage should not exceed 80%)
- Node HeapMemory usage (%) (recommended alarm threshold: 85%; usage should not exceed 90%)
- Node CPU usage (%) (should not exceed 95%)
- Node load_1m (reference value: 80% of the number of CPU cores)
- Cluster query QPS (count/second)
- Cluster write QPS (count/second)
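The thresholds above can be encoded in a simple alarm-evaluation routine. The following sketch is illustrative only; the metric names and sample values are hypothetical, and real values would come from your monitoring system:

```python
# Alarm thresholds from the monitoring recommendations above.
THRESHOLDS = {
    "disk_usage_pct": 75.0,   # warn at 75%, usage should not exceed 80%
    "heap_usage_pct": 85.0,   # warn at 85%, usage should not exceed 90%
    "cpu_usage_pct": 95.0,    # should not exceed 95%
}

def check_alarms(metrics: dict, cpu_cores: int) -> list:
    """Return the names of metrics that breached their thresholds."""
    breached = [name for name, limit in THRESHOLDS.items()
                if metrics.get(name, 0.0) >= limit]
    # load_1m reference value: 80% of the number of CPU cores.
    if metrics.get("load_1m", 0.0) >= 0.8 * cpu_cores:
        breached.append("load_1m")
    return breached

sample = {"disk_usage_pct": 78.2, "heap_usage_pct": 60.0,
          "cpu_usage_pct": 40.0, "load_1m": 1.5}
print(check_alarms(sample, cpu_cores=4))  # ['disk_usage_pct']
```

In practice you would wire such checks into the cloud monitoring service's alert rules rather than a script, but the threshold logic is the same.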
Basic Service Preparation and Configuration
Object Storage Service (OSS)
Because a traditional user-created NFS has poor reliability, high cost, and long downtime, we use OSS on the cloud.
Message Queue (MQ)
A traditional user-created open source Kafka deployment usually has many drawbacks, such as version-specific bugs, lack of official technical support, and failures that are hard to diagnose. We therefore use on-cloud MQ Kafka. Unlike self-managed open source Kafka, on-cloud MQ Kafka is a fully managed service, thoroughly resolving the persistent pain points of the open source deployment. You only need to focus on service development, without deployment and O&M work, which reduces costs and improves flexibility and reliability.
Distributed Application Configuration Center
We use Alibaba Cloud Application Configuration Management (ACM) for centralized configuration. ACM is an application configuration center for centralized management and pushing of application configuration in a distributed architecture environment. ACM significantly reduces the workload of configuration management and improves service capabilities of configuration management in microservice, DevOps, and big data scenarios.
We use Alibaba Cloud MaxCompute as the offline big data computing service on the cloud. Create the MaxCompute project, initialize its configuration, and import data in advance so that big data developers can start development quickly.
Application Deployment and Configuration
As we want to build continuous integration with Jenkins and Docker on the cloud, we use CodePipeline. Alibaba Cloud CodePipeline is a SaaS-based continuous integration/continuous delivery service that is fully compatible with Jenkins capabilities and user habits. It works out of the box, with no O&M needed. Because it is fully compatible with Jenkins plugins, it supports continuous deployment to Elastic Compute Service (ECS) and Container Service, enabling a quick start.
Server Load Balancer Configuration
After configuring Server Load Balancer, you have to resolve the domain name to the public network service address of the Server Load Balancer instance. For example, suppose the website domain name is www.abc.com, the website runs on an ECS instance with the public IP address 18.104.22.168, and the system allocates the public IP address 22.214.171.124 to the newly created Server Load Balancer instance. Add the ECS instance to the backend server pool of the Server Load Balancer instance and resolve the domain name www.abc.com to 22.214.171.124. We recommend that you use an A record (that is, resolve the domain name directly to the IP address).
Test and Verification
After all the applications are deployed and started, test and verify them with, for example, functional tests and integration tests.
If all applications pass the tests (functional, integration, and performance) without problems, you can cut traffic over, provided that you have notified customers and staff of the maintenance window in advance. The system, code, files, and database are all migrated during this window. During maintenance, stop writing new data and enable read-only mode for the database to reduce the effect on users. After the database is synchronized, resolve the domain name to Alibaba Cloud, which is the last step of the migration.
Although the domain name has been resolved to the new IP address, the shortest refresh cycle (TTL) for the resolution record is 10 minutes, and we cannot control local DNS caches on clients, which means some customers still visit the old site. For customers still visiting the IDC, we enable a 302 redirection on the front-end NGINX server of the IDC to direct them to Alibaba Cloud.
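The 302 redirection could look like the following NGINX server block. This is a sketch, not the article's actual configuration: `cloud.abc.com` is a hypothetical alternate hostname that already resolves to the Alibaba Cloud address, used so that clients whose DNS cache still points the main domain at the IDC do not loop back to the same server:

```nginx
server {
    listen 80;
    server_name www.abc.com;

    # Clients with stale DNS still reach the IDC; send them to a
    # hostname that already resolves to Alibaba Cloud.
    return 302 http://cloud.abc.com$request_uri;
}
```

Once DNS caches expire, this server block receives no further traffic and can be decommissioned together with the rest of the IDC front end.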
Because the NGINX server performs Layer-7 load balancing, requests are matched by domain name: the "server_name" directive of the NGINX server corresponds to the domain name configured in the redirection URL. To resolve the domain name to the Alibaba Cloud IP address, set it on the host configuration page for the NGINX server. Observe for a period of time until all traffic is smoothly cut over to Alibaba Cloud, as shown in the following figure. It is still recommended that you retain the old applications for a period of time in case of emergency; if any problems occur, you can modify the DNS resolution to quickly restore the original services.
Enterprises need to develop detailed rollback plans based on their business. For example, if the business is important and no data errors can be tolerated, it is recommended to keep the online and offline databases synchronized during the cutover: set the online (cloud) database as the master, set the offline database as the slave, and enable master-slave synchronization to ensure data consistency. Although this process is complex, it is necessary to anticipate risks and prepare countermeasures. For important services and data, prepare a detailed cutover plan and rollback plan; for less important services where rollback is expensive, handle the situation flexibly.
Anti-DDoS Service Pro Configuration
The data reporting address for a smart terminal is a public network address. Because the automobile data reporting address is exposed, it needs particular protection against malicious attacks. You can configure Anti-DDoS Service Pro to direct attack traffic to a protected IP address, ensuring a stable and reliable data reporting address.
After configuration on the Alibaba Cloud Security Anti-DDoS console is complete, the Anti-DDoS Service Pro instance forwards requests arriving at the Anti-DDoS Service Pro port to the corresponding port on the origin site (real server). To maximize service stability, we recommend a local test before fully switching traffic: access the backend service port of the Anti-DDoS Service Pro instance locally with the telnet command. If the connection succeeds, data is being forwarded correctly.
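The telnet check above is simply a TCP connectivity test, which can also be scripted. The following sketch demonstrates the idea against a local listener; in practice you would point it at the Anti-DDoS Service Pro instance address and forwarded port (the local server here is just a stand-in so the example is self-contained):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection, equivalent to `telnet host port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Local demo listener standing in for the real forwarded port.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # pick a free ephemeral port
server.listen(1)
port = server.getsockname()[1]

print(port_is_open("127.0.0.1", port))  # True: port accepts connections
server.close()
print(port_is_open("127.0.0.1", port))  # False: listener is gone
```

Running this check for every forwarded port before the cutover quickly reveals any misconfigured forwarding rules.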
Web Application Firewall (WAF) Configuration
Back-to-source IP addresses are the source IP addresses used by the WAF instance when it acts as a proxy, requesting data from the server on behalf of clients. From the server's perspective, all source IP addresses belong to the WAF back-to-source IP segments after the server is connected to the WAF instance, and the real client IP addresses are carried in the XFF (X-Forwarded-For) field of the HTTP header. Once connected, the WAF instance sits between clients and the server as a reverse proxy; the real server IP address is hidden, so clients see the WAF instance rather than the origin site, as shown in the following figure ("origin": origin site):
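Because the server now sees only WAF back-to-source addresses, applications that log or rate-limit by client IP must read the XFF header instead. A minimal sketch of that extraction follows; the header values are hypothetical examples, and note that the leftmost XFF entry is only trustworthy when the first proxy in the chain (here, WAF) sets or validates it:

```python
def real_client_ip(xff_header: str) -> str:
    """Extract the original client IP from an X-Forwarded-For value.

    XFF is a comma-separated list: the leftmost entry is the original
    client; later entries are intermediate proxies (for example, the
    WAF back-to-source addresses).
    """
    return xff_header.split(",")[0].strip()

print(real_client_ip("203.0.113.7, 100.64.1.2, 100.64.1.3"))
# -> 203.0.113.7 (the original client, not a back-to-source IP)
```

Most web frameworks offer an equivalent built-in (often behind a "trusted proxies" setting), which is preferable to hand-rolled parsing in production.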
Before you cut service traffic over to the WAF instance, verify locally that the configuration and WAF forwarding are normal. First, modify the local hosts file so that requests to the protected site pass through the WAF instance, and save the changes. Then ping the protected domain name locally: the IP address to which the domain name resolves should be the previously bound WAF IP address. If it is still the origin site address, flush the local DNS cache (on Windows, run the ipconfig /flushdns command at the cmd CLI). After confirming that the hosts file entry has taken effect (the domain name resolves locally to the WAF instance IP address), open a browser and enter the domain name. If the WAF instance is configured correctly, the website is accessible.
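The hosts-file step above can be double-checked programmatically by parsing the file and confirming the mapping. The sketch below is illustrative; the file content and addresses are hypothetical examples, and on a real machine you would read C:\Windows\System32\drivers\etc\hosts or /etc/hosts:

```python
def hosts_lookup(hosts_text: str, domain: str):
    """Return the IP a hosts file maps `domain` to, or None if absent."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        parts = line.split()                    # IP followed by hostnames
        if domain in parts[1:]:
            return parts[0]
    return None

hosts = """
# local test entry for WAF verification (hypothetical WAF IP)
198.51.100.20  www.abc.com
"""
print(hosts_lookup(hosts, "www.abc.com"))  # 198.51.100.20
```

If the lookup returns the WAF IP address you bound earlier, the local override is in place and the browser test will exercise the WAF path.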
To cut over services, modify the DNS record and resolve the website domain name to the WAF instance IP address. Once the domain name resolves to the WAF instance, traffic flows through the WAF instance and is protected. After configuring the resolution record, you can ping the website domain name or use other tools to check that the DNS resolution has taken effect.
Published at DZone with permission of Leona Zhang. See the original article here.
Opinions expressed by DZone contributors are their own.