
New Computing Architectures to Deliver Vastly Better Compute Performance


The best IT infrastructure performance architecture is one where an app's active data is located on or as close to CPUs as possible.


Consistently fast access and easy communication from any device, from anywhere, is now the default assumption of almost everyone. Keen interest in the immense value hidden deep within huge quantities of data is no longer confined to analysts; it is now a key focus of almost every organization. And apprehension about meeting these web-scale requirements has reached the C-suite. These expectations, interests, and anxieties have placed enormous pressure on IT to build and manage a software and hardware infrastructure that can deliver what companies need in order to respond to today's customer demands and competitive pressures.

Enter Intel's 3D XPoint memory and the Google/Rackspace Zaius P9 server. These new computing architectures are designed to solve web-scale challenges, but to use them effectively, applications and compute services will have to be modified to take advantage of the non-volatile, in-memory computing benefits these innovations offer.
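To give a sense of what "modifying an application" for this kind of hardware can look like, here is a minimal sketch, assuming a persistent-memory device exposed to the OS as a DAX-mounted file (the path /mnt/pmem/app_state.bin is hypothetical, not a real product path). Rather than serializing hot data out to disk, the application memory-maps the region and works on it directly:

```python
import mmap
import os

# Hypothetical path to a file on a DAX-mounted persistent-memory device.
PMEM_PATH = "/mnt/pmem/app_state.bin"
REGION_SIZE = 64 * 1024 * 1024  # 64 MiB region for the app's hot data

# Create (or reopen) the backing file and map it into the address space.
fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, REGION_SIZE)
region = mmap.mmap(fd, REGION_SIZE)

# Reads and writes now go straight to the mapped region at memory speed;
# on a non-volatile device, the data also survives a restart.
region[0:5] = b"hello"
region.flush(0, mmap.PAGESIZE)  # explicitly persist the dirty page

print(region[0:5])  # b'hello'

region.close()
os.close(fd)
```

Dedicated persistent-memory libraries layer transactional guarantees on top of this pattern, but the basic shift is the same: treat the device as memory rather than as storage.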

Getting there will take time, and new technologies that leverage in-memory compute are helping bridge the gap. Choosing ones that do not require modifications to application or service code will make integration with 3D XPoint and Zaius straightforward once they are more widely available.

To be clear, in-memory computing is nothing new. Using DRAM to increase the workload performance of databases, applications, and other services has been part of computing architectures since the mainframe era decades ago, and in-memory caching techniques have been used to accelerate storage I/O for the past 25 years. Properly leveraged, in-memory computing yields applications and services that not only perform better but also cost less to scale. Obtaining these benefits, however, has been difficult and expensive, and in many cases the data processed in memory is highly vulnerable to DRAM disruptions.
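As a concrete illustration of the caching technique described above, the sketch below shows a read-through cache that serves repeat reads from DRAM instead of the slower backing store (names such as ReadThroughCache and slow_lookup are illustrative, not from any particular product):

```python
from collections import OrderedDict
import time

class ReadThroughCache:
    """Tiny in-memory read-through cache with LRU eviction."""

    def __init__(self, backing_store, capacity=1024):
        self.backing_store = backing_store   # any callable: key -> value
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)      # mark as most recently used
            return self._data[key]
        value = self.backing_store(key)      # slow path: hit the real store
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry
        return value

def slow_lookup(key):
    time.sleep(0.05)                         # stand-in for a disk or network read
    return f"value-for-{key}"

cache = ReadThroughCache(slow_lookup, capacity=2)
cache.get("a")   # ~50 ms, goes to the backing store
cache.get("a")   # microseconds, served from DRAM
```

The catch, as noted above, is that everything in _data lives in volatile DRAM, which is exactly why data safety has to be part of the selection criteria.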

New offerings address those challenges. When choosing a solution, look for one that lets you easily balance cost-effective in-memory computing with data safety and effectiveness in the cloud and other virtualized computing environments.

The solution you choose should support replication to protect data from component failure. It should also be able to claim and hold DRAM independently of other EC2 instances running on the same physical server, so there is no contention for shared DRAM resources. And to address storage and network I/O performance, make sure the optimization tool you choose works in close cooperation with AWS's integrated hypervisors: this integration provides hypervisor-level caching that synchronously replicates active data across virtual machines, ensuring data consistency.
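The replication requirement is easiest to see in the write path. The following sketch is illustrative only (ReplicatedCache, InProcessReplica, and replicate() are hypothetical names, not any vendor's API); it shows a write that is acknowledged only after a peer holds a synchronous copy, so the failure of a single instance's DRAM cannot lose acknowledged data:

```python
class ReplicatedCache:
    """Sketch of a write path that acknowledges only after the replica confirms."""

    def __init__(self, local_store, replica_client):
        self.local = local_store        # e.g. a dict held in this instance's DRAM
        self.replica = replica_client   # hypothetical client for a peer VM

    def put(self, key, value):
        self.local[key] = value
        # Synchronous replication: block until the peer has the write,
        # so losing this instance cannot lose acknowledged data.
        ack = self.replica.replicate(key, value)
        if not ack:
            del self.local[key]         # roll back rather than diverge
            raise RuntimeError("replica did not acknowledge write")
        return True

class InProcessReplica:
    """Stand-in peer used here only to make the sketch runnable."""
    def __init__(self):
        self.store = {}
    def replicate(self, key, value):
        self.store[key] = value
        return True

cache = ReplicatedCache({}, InProcessReplica())
cache.put("user:42", {"name": "Ada"})
```

A hypervisor-integrated solution does this work below the application, which is why no application or service code has to change.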

At the end of the day, delivering the highest IT infrastructure productivity should be simple, cost-effective, and worry-free. The best IT infrastructure performance architecture is one where an application's active data is located on or as close to the CPUs as possible, and a safe in-memory approach delivers optimal IOPS for virtualized computing use cases.

See my recent DZone post, Reaping In-Memory Computing Benefits on AWS, to learn more about significantly improving the performance of your EC2 deployments and increasing the ROI you achieve from AWS.
