Transient Clusters in the Cloud for Big Data
When it comes to getting value from big data, paying less and processing it faster to reduce time-to-insight are always top-of-mind goals.
Cheaper, faster. Faster, cheaper. When it comes to getting value from Big Data, paying less and processing it faster to reduce time-to-insight are always top-of-mind goals. To achieve them, many enterprises are turning to the cloud to augment or replace their on-premise Hadoop infrastructure.
Pay Only for What You Need
One key reason for the shift is that Hadoop in the cloud decouples storage from compute, so enterprises can pay for storage at a lower rate than for compute. The cloud also offers virtually unlimited scalability that on-premise infrastructure can't match. With cloud services like Amazon EMR or Microsoft Azure HDInsight, enterprises can spin up and scale Hadoop clusters on demand. Have a job that isn't processing fast enough? Add more nodes, then scale back down when it's done. Have several jobs of various sizes? Run multiple clusters, each sized exactly to its job, so no resources are wasted. Add transient clusters to the mix, and the cloud becomes an extremely customizable Big Data solution.
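As a rough illustration of that on-demand model, here is a minimal sketch using the AWS SDK for Python (boto3) against Amazon EMR. The cluster name, release label, instance types, and counts are placeholders rather than values from the article:

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
emr = boto3.client("emr")

# Spin up a small Hadoop/Spark cluster on demand.
response = emr.run_job_flow(
    Name="on-demand-analytics",              # placeholder name
    ReleaseLabel="emr-6.15.0",                # placeholder EMR release
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,  # keep running until we resize or terminate
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
cluster_id = response["JobFlowId"]

# Job not finishing fast enough? Grow the core instance group on the fly.
groups = emr.list_instance_groups(ClusterId=cluster_id)["InstanceGroups"]
core_group = next(g for g in groups if g["InstanceGroupType"] == "CORE")

emr.modify_instance_groups(
    ClusterId=cluster_id,
    InstanceGroups=[{"InstanceGroupId": core_group["Id"], "InstanceCount": 8}],
)
```

The same resize call can drop the instance count back down once the job finishes, so the extra nodes are only billed while they are actually working.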
Leverage Transient Clusters
Transient clusters are compute clusters that automatically shut down and stop billing when processing is finished. However, using this cost-effective approach has been an issue in the past, as metadata is automatically deleted by the cloud provider when a transient cluster is shut down. Therefore, most enterprises have opted to pay for persistent compute across the board in order to maintain the metadata.
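On EMR, for example, a transient cluster is simply one launched with its steps defined up front and configured not to stay alive after the last step completes. A minimal sketch with boto3, where the job script path, bucket, and instance sizes are hypothetical placeholders:

```python
import boto3

emr = boto3.client("emr")

# Launch a transient cluster: it runs the submitted steps and then
# terminates itself, so billing stops when processing is finished.
response = emr.run_job_flow(
    Name="nightly-etl-transient",             # placeholder name
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 5,
        "KeepJobFlowAliveWhenNoSteps": False,  # auto-terminate after the last step
    },
    Steps=[
        {
            "Name": "run-etl",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://my-bucket/jobs/etl.py"],  # placeholder path
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Launched transient cluster:", response["JobFlowId"])
```

Note that any metastore running on the cluster itself disappears along with it, which is exactly the metadata problem described above.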
Now, with a data management platform like Bedrock, enterprises can leverage transient clusters for cost savings and still maintain their metadata. How does it work? In Bedrock's case, the data management platform monitors the ingestion of the data being loaded to the transient cluster in the cloud and stores the resulting metadata outside EMR/HDInsight. That way, the metadata is still available after the cluster is terminated.
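The article doesn't detail Bedrock's internal mechanism, but the general pattern is to point the cluster at a metadata store that lives outside the cluster. As one common illustration of that idea (not Bedrock's implementation), an EMR cluster can be configured to use the AWS Glue Data Catalog as its Hive metastore, so table definitions created by a transient cluster survive its termination:

```python
# Illustration only -- not Bedrock's implementation. Keeping Hive table
# metadata in an external catalog (AWS Glue Data Catalog here) so it
# outlives the transient cluster. Pass this list to
# run_job_flow(..., Configurations=glue_metastore_config).
glue_metastore_config = [
    {
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        },
    },
    {
        "Classification": "spark-hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        },
    },
]
```

A subsequent cluster, or a query service outside the cluster, can then read those table definitions directly from the external catalog after the transient cluster is gone.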
Why is this important? Metadata is the key to getting value from Big Data. It’s the technical, operational and business information about the data that allows users to find the data they need in the data lake, verify its quality and trust the validity of their analyses and business intelligence.
A Hybrid Approach
Moving storage and applications to the cloud isn't an all-or-nothing proposition. In reality, most enterprises take a hybrid approach to the data lake: some storage (perhaps less sensitive, third-party data) and some processing (including transient clusters) in the cloud, and the rest on-premise. An intelligent Hadoop data lake management platform like Bedrock is flexible and provides a centralized way to manage on-premise and cloud-based computing across the enterprise.