
Reaping In-Memory Computing Benefits on AWS

The benefits of in-memory storage are clear, and you can maximize your gains with Amazon Machine Images (AMIs), which allow custom configuration of AWS EC2 instances.


More people are adopting in-memory technologies these days, and it’s no wonder. With soaring mountains of data, ever-expanding concurrency demands, and the well-recognized value of analytics, many understand that in-memory computing delivers the fastest, most responsive compute capabilities for these tasks today. Solid state disks don’t even come close to in-memory performance. Consequently, companies are innovating faster than ever with in-memory solutions to deliver those benefits.

But if you think in-memory technologies are too complex, too costly, and too risky for your data, think again, especially if you are running AWS EC2 instances. You too can safely deliver the benefits of in-memory computing, significantly increasing the performance of your AWS workloads while lowering the cost of your current cloud resources.

AWS users working with in-memory technologies inside the AWS ecosystem already know about Amazon ElastiCache and the various specialized database and file solutions offered as Amazon Machine Images. Such offerings give developers the means to code in-memory capabilities into their applications. For developers who understand how to use these technologies, they provide reasonable approaches to achieving the value of in-memory computing. However, some are neither simple nor low cost.
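As a simple illustration of what that application-level coding looks like, here is a minimal read-through caching sketch assuming an ElastiCache Redis endpoint and the open-source redis-py client; the endpoint hostname and the db_lookup callable are placeholders for your own environment, not part of any AWS API.

```python
# Minimal read-through caching sketch against an ElastiCache Redis endpoint.
# The hostname and the db_lookup callable are placeholders, not real resources.
import redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def get_user(user_id, db_lookup):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached                  # served from DRAM, no trip to the database
    value = db_lookup(user_id)         # cache miss: read from the persistence layer
    cache.set(key, value, ex=300)      # keep the result in memory for five minutes
    return value
```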

AWS system administrators and DevOps personnel, who are often responsible for AWS budgets and costs and for achieving AWS EC2 workload goals, are generally not permitted to alter the code of the applications they manage. So, how do they deliver the value of in-memory capabilities to their organizations?

Most often, they provision Amazon Machine Images (AMIs) that optimize EC2 instances with in-memory capabilities for the workloads they manage. They look for pre-configured EC2 AMIs that ensure cache consistency and data persistence in addition to the enhanced in-memory workload capacity, density, and performance they need. To find these AMIs, search the AWS Marketplace for “in-memory EC2.”
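That search can also be scripted. The following is a rough sketch using the boto3 SDK; the wildcard name filter is an assumption, since Marketplace listing names vary, and you would still vet each result by hand.

```python
# Rough sketch: list AWS Marketplace AMIs whose names suggest in-memory optimization.
# The "*in-memory*" pattern is an assumption; adjust it to the listings you care about.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_images(
    Owners=["aws-marketplace"],
    Filters=[{"Name": "name", "Values": ["*in-memory*"]}],
)

for image in response["Images"]:
    print(image["ImageId"], image.get("Name", ""))
```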

If DevOps and SysAdmins can engage developers to build their own in-memory EC2 AMIs for the specific application performance and availability targets they require, what should those developers know? They need knowledge of caching algorithms, eviction policies, and write policies such as write-through, write-around, and write-back. Developers may also need to leverage the operating system’s kernel, the AWS hypervisor, Elastic Block Store (EBS) SSDs, and other AWS resources to code custom caching utilities into their unique in-memory AWS AMI solutions. Although not all of the information for these tasks is in the AWS documentation, it is still an excellent place to start. Additionally, there are active communities of independent developers who can help with AMI development questions.
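To make those write policies concrete, here is a simplified, in-process Python sketch of an LRU cache that can run in write-through or write-back mode. It is illustrative only; the read_from_store and write_to_store callables are hypothetical stand-ins for whatever persistence layer (for example, EBS-backed storage) a real caching utility would sit in front of.

```python
# Simplified sketch of an LRU cache with a selectable write policy (illustrative only).
# read_from_store and write_to_store are hypothetical hooks into the persistence layer.
from collections import OrderedDict

class WritePolicyCache:
    def __init__(self, capacity, read_from_store, write_to_store, policy="write-through"):
        self.capacity = capacity
        self.read_from_store = read_from_store    # fetch a value from the backing store
        self.write_to_store = write_to_store      # persist a value to the backing store
        self.policy = policy                      # "write-through" or "write-back"
        self.entries = OrderedDict()              # key -> (value, dirty flag)

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)         # mark as most recently used
            return self.entries[key][0]
        value = self.read_from_store(key)         # miss: read through to the store
        self._insert(key, value, dirty=False)
        return value

    def put(self, key, value):
        if self.policy == "write-through":
            self.write_to_store(key, value)       # persist immediately (small inconsistency window)
            self._insert(key, value, dirty=False)
        else:                                     # write-back: persist lazily when evicted
            self._insert(key, value, dirty=True)

    def _insert(self, key, value, dirty):
        self.entries[key] = (value, dirty)
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            old_key, (old_value, was_dirty) = self.entries.popitem(last=False)  # evict LRU entry
            if was_dirty:
                self.write_to_store(old_key, old_value)
```

In write-through mode every put is persisted immediately, which keeps cache and store closely aligned; write-back defers persistence until eviction, which favors write throughput at the cost of a larger inconsistency window.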

Whether you develop a custom in-memory AWS AMI or use an in-memory-optimized EC2 AMI from the AWS Marketplace, the solution should have the following features beyond raw processing speed:

Simplicity: Reducing deployment complexity of an in-memory solution ensures its value is rapidly realized across a broad spectrum of needs, from applications and databases to infrastructure services. For instance, an in-memory solution that presents itself and can be used like a standard disk storage volume can be applied to many types of computing services without developing custom code. Services will recognize the in-memory storage volume and use it just as they would a volume of storage on disk.

Consistency: The aim of consistency is to minimize the chance that cached data differs from its counterpart held in the data persistence layer. Some lag is unavoidable, because a DRAM cache accepts writes orders of magnitude faster than even the fastest SSD can persist them. However, good in-memory solutions minimize this risk while also minimizing its impact on scalability, availability, and workload performance. Note that there are different consistency considerations for distributed workloads.

Data persistence: Some in-memory use cases do not need their data to persist beyond the RAM cache. However, if your use case does, then make sure the optimized EC2 AMI you provision from the AWS Marketplace, or build yourself, uses EBS SSD volumes. The best in-memory-optimized AMIs leverage algorithms and hybrid caching options that maximize write concurrency and data persistence on EBS SSDs. In addition, EBS volumes are automatically replicated within their AWS Availability Zones to protect stored data from component failure, providing failover and high-availability options. A minimal sketch of this pattern follows this list.

Single-tenant caching: Contention for RAM can be a real problem in the virtualized world, and it will negate the benefits of an in-memory solution if not directly addressed. Solutions you provision from the AWS Marketplace, or develop yourself, should mitigate such contention at the hypervisor level by permitting some amount of RAM to be dedicated to the in-memory solution being deployed.
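To make the simplicity and persistence points above concrete, here is a hedged sketch assuming a Linux EC2 instance with root access and an EBS volume already mounted at /mnt/ebs: it mounts a RAM-backed tmpfs directory that services can use like any ordinary disk path, then periodically copies its contents to the EBS-backed path for durability. The paths, size, and flush interval are placeholders, not recommendations.

```python
# Hedged sketch: expose RAM as an ordinary directory (tmpfs) and periodically
# flush its contents to an EBS-backed path for persistence.
# Assumes a Linux host with root privileges; all paths and sizes are placeholders.
import shutil
import subprocess
import time

RAM_PATH = "/mnt/ramdisk"       # in-memory volume that services treat like a normal disk path
EBS_PATH = "/mnt/ebs/backup"    # EBS-backed path, replicated within its Availability Zone

def mount_ramdisk(size="4g"):
    subprocess.run(["mkdir", "-p", RAM_PATH], check=True)
    subprocess.run(
        ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", RAM_PATH],
        check=True,
    )

def flush_to_ebs():
    # Copy the in-memory tree onto the EBS-backed volume for durability.
    shutil.copytree(RAM_PATH, EBS_PATH, dirs_exist_ok=True)

if __name__ == "__main__":
    mount_ramdisk()
    while True:
        time.sleep(60)          # flush once a minute; tune to your consistency needs
        flush_to_ebs()
```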

Obviously, the easiest (and likely the least expensive) approach to delivering an in-memory solution for your AWS EC2 deployments is to utilize pre-configured, in-memory Amazon Machine Images. These AMIs have already optimized EC2 instances with all the necessary operating system and hypervisor utilities, and they leverage other AWS components and services to ensure you realize the full value of in-memory computing for your AWS EC2 workloads. But if pre-optimized EC2 instances don’t address your unique requirements, there are options. Building your own in-memory solution is more complex and may cost more, but the investment is worth it for the processing performance gains it delivers in custom use cases.

Fortunately, in-memory computing options are rapidly becoming the norm as DRAM prices continue to drop and cloud vendors, like AWS, continue to provide ever-more powerful resources that can be leveraged by pre-optimized instances and in-memory technologies.


