Advancing Application Performance with NVMe Storage, Part 1
NVMe storage can provide AI and ML workloads with the high performance they require.
With big data on the rise and data algorithms advancing, the ways in which technology is applied to real-world challenges have grown more automated and autonomous. This has given rise to an entirely new set of computing workloads for Machine Learning, which in turn drives Artificial Intelligence applications.
AI/ML can be applied across a broad spectrum of applications and industries. Financial analysis with real-time analytics is used for predicting investments and drives the FinTech industry's need for high-performance computing. Real-time image recognition is a key enabler for self-driving vehicles, while facial recognition is used by law enforcement across the globe. Manufacturing uses image recognition technology to spot defects in materials. Organizations such as NOAA use satellite imagery to spot changes in weather, while social media platforms use image recognition to tag photos of friends and family.
What is common among these use cases is the need for a high level of parallel computing power, coupled with a high-performance, low-latency architecture to enable parallel processing of data in real time across the compute cluster. The "training" phase of machine learning is critical and can take an excessively long time, especially as the training data set grows exponentially to enable deep learning for AI.
With storage performance now recognized as a critical component of AI/ML application performance, the next step is to identify the ideal storage platform. Non-Volatile Memory Express (NVMe) based storage systems have gained traction as the storage media of choice to deliver the best throughput and latency. Shared NVMe storage systems unlock the performance of NVMe, and offer a strong alternative to using local NVMe SSDs inside of GPU nodes.
The Rise of GPUs for AI/ML
GPUs were originally created for high-performance image creation, and are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them much more efficient than general-purpose CPUs for algorithms where large blocks of data are processed in parallel. For this reason, GPUs have found strong adoption in the AI/ML use case: they allow for a high degree of parallel computing, and current AI-focused applications have been optimized to run on GPU-based computing clusters.
With the powerful compute performance of GPUs, the bottleneck moves to other areas of the AI/ML architecture. For example, the volume of data required to feed machine learning requires massive parallel read access to shared files from the storage subsystem across all nodes in the GPU cluster. This creates a performance challenge that NVMe shared storage systems are ideally suited to address.
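To make the data-feeding pattern concrete, here is a minimal sketch of how a loader might issue reads for many dataset shards concurrently so that deep storage queue depths can be exploited. The file layout, chunk size, and function names are illustrative assumptions, not details from the article:

```python
# Sketch: feeding a training pipeline with concurrent reads of dataset
# shards. Shard files and sizes here are stand-ins for real training data.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_shard(path, chunk_size=1 << 20):
    """Read one shard in fixed-size chunks, as a data loader might."""
    blocks = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            blocks.append(chunk)
    return b"".join(blocks)

def parallel_load(shard_paths, workers=8):
    """Issue reads for all shards concurrently; on a shared NVMe system
    these outstanding requests can proceed in parallel on the storage side."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_shard, shard_paths))

# Demo with small temporary files standing in for dataset shards.
tmp = tempfile.mkdtemp()
paths = []
for i in range(4):
    p = os.path.join(tmp, f"shard{i}.bin")
    with open(p, "wb") as f:
        f.write(bytes([i]) * 1024)
    paths.append(p)

data = parallel_load(paths)
print(sum(len(d) for d in data))  # 4096
```

In a real cluster the thread pool would be replaced by the framework's data loader, but the principle is the same: keep many read requests in flight so the storage system, not a serial reader, sets the pace.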
Shared NVMe Storage for High-Performance Machine Learning (ML)
One of the benefits of shared NVMe storage is the ability to create even deeper neural networks due to the inherent high performance of shared storage, opening the door for future models that cannot be achieved today with non-shared NVMe storage solutions.
Today, there are storage solutions that offer patented architectures built from the ground up to leverage NVMe. The key to performance and scalability is the separation of control and data path operations between the storage controller software and the host-side agents. The storage controller software provides centralized control and management, while the agents manage data path operations with direct access to shared storage volumes.
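The control/data path split described above can be sketched as two small components: a controller that owns volume metadata, and a host-side agent that, once handed a mapping, moves data without routing it back through the controller. All class and method names below are hypothetical, invented for illustration:

```python
# Minimal sketch of control-path / data-path separation.
# Names are illustrative; the article does not specify an API.

class StorageController:
    """Control path: creates and tracks volumes, but never carries I/O data."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size):
        # A bytearray stands in for extents on shared NVMe drives.
        self.volumes[name] = bytearray(size)
        return self.volumes[name]

class HostAgent:
    """Data path: given a volume mapping from the controller, the agent
    reads and writes directly, bypassing the controller entirely."""
    def __init__(self, volume):
        self.volume = volume

    def write(self, offset, payload):
        self.volume[offset:offset + len(payload)] = payload

    def read(self, offset, length):
        return bytes(self.volume[offset:offset + length])

ctrl = StorageController()
vol = ctrl.create_volume("ml-dataset", 64)   # control-path operation
agent = HostAgent(vol)                       # direct handle to the volume
agent.write(0, b"weights")                   # data-path operations
print(agent.read(0, 7))  # b'weights'
```

The design point is that the controller is involved only in setup and management; steady-state I/O never funnels through a central box, which is what allows performance to scale with the number of hosts.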
While AI/ML workloads are run exclusively on the GPUs within the cluster, that doesn't mean that CPUs have been eliminated from the GPU clusters completely. The operating system and drivers still leverage the CPUs, but while the machine learning training is in progress, the CPU is relatively idle. This provides the perfect opportunity for an NVMe-based storage architecture to leverage the idle CPU computing capacity for a high-performance distributed storage approach.
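As a rough illustration of reserving idle CPU capacity for a storage agent, the Linux-only sketch below pins the current process to a chosen set of cores, leaving the remaining cores free. The core numbers are arbitrary assumptions; a real agent would choose cores based on the node's topology:

```python
# Sketch: pinning a storage-agent process to otherwise-idle CPU cores.
# Linux-specific (os.sched_setaffinity); core set {0} is an arbitrary example.
import os

def pin_to_cores(cores):
    """Restrict this process to the given CPU cores, leaving the rest
    free for the OS, drivers, and GPU-feeding work."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, cores)      # 0 = current process
        return os.sched_getaffinity(0)
    return None  # platform without sched affinity (e.g. macOS)

affinity = pin_to_cores({0})
print(affinity)
```

In practice the agent would be started under such an affinity mask (or via `taskset`) so that its data-path work never contends with the cores the training framework depends on.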
With the NVMe protocol supporting far more queues and connections per SSD than legacy storage protocols, the storage agents use RDMA to give each GPU node a direct connection to the drives. This approach enables the agents to perform up to 90% of the data path operations between the GPU nodes and storage, reducing latency to be on par with local SSDs.
In this scenario, running the NVMe-based storage agent on the idle CPU cores of the GPU nodes enables the NVMe-based storage to deliver 10x better performance than competing all-flash solutions, while leveraging existing compute resources that are already installed and available to use.
Published at DZone with permission of Zivan Ori. See the original article here.
Opinions expressed by DZone contributors are their own.