Cloud in 2017: Opportunities and Challenges for Data and Apps
The cloud has momentum, but have you evaluated the advantages and challenges of migrating your current applications to the cloud?
2017 is shaping up to be the year of the cloud, just as 2016 was, and 2018 most likely will be, too. The momentum toward the cloud is real: enterprises both small and large are making significant investments to move their applications and IT infrastructure to the cloud.
But the cloud brings many challenges for IT administrators and architecture teams. The questions they find themselves asking fall into two main buckets:
- How can I take advantage of what the cloud has to offer in terms of cost savings and efficiencies?
- How can I re-architect my current applications to leverage the cloud's promise of flexibility and agility? Essentially: how can organizations follow in the footsteps of Netflix, Dropbox, Uber, and many others in building distributed, cloud-native, cloud-scale applications?
One of the cloud's main selling points is storage, compute, and networking as fundamental offerings. Add the ability to elastically expand each of these services, and you can see why organizations all over the world are excited about building their applications in the cloud. The bad news, however, is that cloud-based and third-platform applications are not set up today to make continuous protection/backup copies across a database cluster, due to their distributed architectures. Cloud database vendors have mostly focused on solving database-centric problems while happily ignoring the hard problems of data protection, such as cloud-scale backup and recovery. What's more, production applications can't be stopped or slowed down in any way during backup procedures.
But, not all is lost. Based on our experiences and conversations with customers leveraging the cloud, we have identified how you can set your applications up for success. Read below for our three key takeaways.
Availability for Your Data
Cloud storage such as Amazon S3 is always available, and with multiple availability zones, you really don't need to worry about availability of the storage itself. The question you do need to think about, however, is: "Are you relying too much on storage replication alone?"
In other words, "Does your application need logical replication?" If you are in retail, an example could be an e-commerce application with multiple object types: a product catalog, subscribers, customers, and so on. Here you need replication at a semantic level, at the object level, so that your application doesn't have to worry about disk structures when reasoning about availability. Logical replication can make it easier for developers to build distributed applications, and it can also enable use cases such as search and analytics on the logical replicas.
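To make the distinction concrete, here is a minimal, hypothetical sketch of object-level (logical) replication for a catalog. Instead of copying disk blocks, the source ships semantic change events, and any consumer of that stream, a serving replica or a search/analytics replica, can apply them without knowing anything about the underlying storage layout. All class and function names here are illustrative, not from any real replication product.

```python
import json
from dataclasses import dataclass, field

@dataclass
class LogicalReplica:
    """A replica that understands objects, not disk blocks."""
    objects: dict = field(default_factory=dict)

    def apply(self, event: dict) -> None:
        # Apply one object-level change event to this replica.
        if event["op"] == "upsert":
            self.objects[event["key"]] = event["value"]
        elif event["op"] == "delete":
            self.objects.pop(event["key"], None)

def replicate(change_log: list, replicas: list) -> None:
    # Every replica consumes the same ordered stream of semantic events.
    for event in change_log:
        for replica in replicas:
            replica.apply(event)

# Example: a catalog change stream fanned out to a serving replica
# and a separate analytics replica.
log = [
    {"op": "upsert", "key": "sku-1", "value": {"name": "Lamp", "price": 25}},
    {"op": "upsert", "key": "sku-2", "value": {"name": "Desk", "price": 120}},
    {"op": "delete", "key": "sku-1"},
]
serving, analytics = LogicalReplica(), LogicalReplica()
replicate(log, [serving, analytics])
print(json.dumps(serving.objects))
```

Because the events carry meaning ("upsert this product") rather than bytes, the analytics replica could just as easily feed a search index instead of a key-value map.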
The recent AWS outage, which took several applications offline, underscores the importance of availability for your application. Consider multi-cloud architectures to avoid disruptions like these.
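A multi-cloud read path can be as simple as an ordered failover: try the primary provider, and fall back to a replica in another cloud if the primary is unreachable. The sketch below is purely illustrative; the provider names and fetch functions are stand-ins, not real SDK calls.

```python
class ProviderDown(Exception):
    """Raised when a cloud provider is unreachable."""

def read_with_failover(key, providers):
    # Return the first successful read, trying providers in order.
    last_error = None
    for name, fetch in providers:
        try:
            return name, fetch(key)
        except ProviderDown as exc:
            last_error = exc  # primary is down; try the next cloud
    raise RuntimeError("all providers unavailable") from last_error

# Simulated providers: the primary is mid-outage, the secondary works.
def primary_fetch(key):
    raise ProviderDown("simulated regional outage")

def secondary_fetch(key):
    return f"value-for-{key}"

source, value = read_with_failover(
    "user-42", [("primary", primary_fetch), ("secondary", secondary_fetch)]
)
print(source, value)
```

The hard part in practice is not the failover logic but keeping the secondary cloud's copy of the data current, which is where the logical replication discussed above comes in.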
Services for Your Data
In addition to elasticity and flexibility, one of the other capabilities of the cloud is its ability to expose services instead of monolithic data stores. What this means is that applications in the cloud can leverage multiple services. For a Netflix-like application, for example, your single sign-on (SSO) may be managed by Okta, while your documents and media files may be managed in a separate "media" microservice.
Thus, different services in the cloud may be backed by different data stores, and these individual data stores may be distributed geographically across separate zones (as with Apache Cassandra or Amazon Aurora). In this new world, a solution that maintains a coherent state of all of your application data across these disparate microservices becomes increasingly important.
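One way to picture "coherent state across disparate stores" is a coordinated recovery point: every store behind every microservice gets stamped with the same recovery-point identifier, so all of them can later be restored to the same logical application state. This is a hypothetical sketch under simplifying assumptions; a real system would also need to quiesce or fence in-flight writes while the marker is taken.

```python
import time

class StoreSnapshotter:
    """Stand-in for the backup interface of one microservice's data store."""
    def __init__(self, name):
        self.name = name
        self.snapshots = {}  # recovery_point_id -> timestamp taken

    def snapshot(self, recovery_point_id):
        self.snapshots[recovery_point_id] = time.time()

def coordinated_snapshot(stores, recovery_point_id):
    # Tag every store with the same recovery-point identifier so the
    # whole application shares one restorable logical state.
    for store in stores:
        store.snapshot(recovery_point_id)
    return recovery_point_id

stores = [
    StoreSnapshotter("sso-db"),
    StoreSnapshotter("media-db"),
    StoreSnapshotter("catalog-db"),
]
rp = coordinated_snapshot(stores, "rp-2017-03-01T00:00Z")
print(all(rp in s.snapshots for s in stores))
```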
Agility for Your Data
The cloud offers self-service, and with it, the ability to analyze data within seconds or minutes of it being recorded in the primary data store.
The same is true for the ability to recover from an application outage. In a world of 24/7 availability, the cloud requires fast recovery from outages. If your application needs to be restored to a previous consistent logical state within minutes, you cannot rely on disparate data stores; you need a data management solution where this logical state is maintained. The alternative is relying on hours of DevOps work to recover your application.
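Restoring to "a previous consistent logical state" can be sketched as replaying an ordered change log up to a chosen recovery point, rather than hand-stitching per-store disk backups. This is a minimal illustration under the assumption that the log is ordered by timestamp; the event shapes are hypothetical.

```python
def restore_to(change_log, recovery_point):
    """Rebuild state from events at or before the recovery point."""
    state = {}
    for event in change_log:
        if event["ts"] > recovery_point:
            break  # everything after the recovery point is discarded
        if event["op"] == "upsert":
            state[event["key"]] = event["value"]
        elif event["op"] == "delete":
            state.pop(event["key"], None)
    return state

log = [
    {"ts": 1, "op": "upsert", "key": "order-1", "value": "paid"},
    {"ts": 2, "op": "upsert", "key": "order-2", "value": "pending"},
    {"ts": 3, "op": "delete", "key": "order-1"},  # accidental deletion
]
# Restore to just before the bad delete at ts=3.
restored = restore_to(log, recovery_point=2)
print(restored)  # {'order-1': 'paid', 'order-2': 'pending'}
```

The point is that recovery becomes a single, minutes-fast operation against one maintained logical state instead of a manual reconciliation across stores.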
In addition to fast analytics and fast recovery, you also need to think about data mobility: data should be portable between different stores and seamlessly exchangeable between different applications.
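Data mobility usually hinges on a neutral intermediate representation: export from one store into a common shape, then load into another store that speaks a different format. Here is a small sketch using JSON lines as the export format and CSV as the target, using only the Python standard library; the record shapes are invented for illustration.

```python
import csv
import io
import json

# Records as exported from "store A" (one JSON object per line).
records = [
    {"id": "1", "name": "Lamp", "price": "25"},
    {"id": "2", "name": "Desk", "price": "120"},
]
exported = "\n".join(json.dumps(r) for r in records)

# Re-hydrate into the neutral in-memory representation...
rows = [json.loads(line) for line in exported.splitlines()]

# ...and load into "store B", which ingests CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "price"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().strip())
```

Because neither side depends on the other's on-disk format, the same records can move between stores, or between applications, without lossy conversions.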
As organizations think about the cloud and about their application data, they need to think outside the box; recoverability in the cloud demands a genuine paradigm shift.
Published at DZone with permission of Jeannie Liou, DZone MVB. See the original article here.