Welcome To The NVMe Place
Welcome to the NVM Express (NVMe) place, containing various links and content on, you guessed it, NVMe. This page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a new server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS, commonly used for accessing Hard Disk Drives (HDD) along with SSD among other devices.
Comparing AHCI/SATA, SCSI/SAS and NVMe, all of which can coexist to address different needs.
Leveraging the standard PCIe hardware interface, NVMe based devices (that have an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5" drive form factor using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, or as add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface for servers (or storage systems) for accessing fast flash and other NVM based devices.
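As a side note on how NVMe devices appear once those in-box drivers load them: on Linux, NVMe namespaces get their own device naming convention (e.g. /dev/nvme0n1) distinct from AHCI/SATA and SCSI/SAS devices handled by the SCSI stack (e.g. /dev/sda). A small illustrative sketch (the classification helper below is my own, not part of any NVMe tooling):

```python
import re

def classify_block_device(name):
    """Classify a Linux block device name by its access protocol family.

    NVMe namespaces appear as nvme<ctrl>n<namespace> (e.g. nvme0n1, with
    optional p<partition> suffix), while SATA and SAS devices driven by
    the SCSI stack appear as sd<letter> (e.g. sda).
    """
    if re.fullmatch(r"nvme\d+n\d+(p\d+)?", name):
        return "NVMe"
    if re.fullmatch(r"sd[a-z]+\d*", name):
        return "SCSI/SAS or SATA"
    return "unknown"

print(classify_block_device("nvme0n1"))  # NVMe namespace 1 on controller 0
print(classify_block_device("sda"))      # SATA or SAS disk via the SCSI stack
```

The point being that NVMe is a separate driver stack and protocol, not just another device hanging off the existing SCSI path.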
NVMe as a "back-end" I/O interface for NVM storage media
NVMe as a "front-end" interface for servers or storage systems/appliances
NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that, like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as a front-end server-to-storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI, and SCSI RDMA Protocol via InfiniBand (among others) are used.
Main features of NVMe include, among others:
- Lower latency due to improved drivers and increased queues (and queue depths)
- Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
- Higher I/O activity rates (IOPS) to boost productivity and unlock the value of fast flash and NVM
- Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
- Dual-pathing of devices, similar to what is available with dual-path SAS devices
- Unlocks the value of more cores per processor socket and software threads (productivity)
- Various packaging options, deployment scenarios and configuration options
- Appears as a standard storage device on most operating systems
- Plug-and-play with in-box drivers on many popular operating systems and hypervisors
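The queue and latency items above can be put in rough perspective with Little's Law (throughput ≈ outstanding I/Os ÷ latency). AHCI supports a single queue of up to 32 commands, while NVMe supports up to 64K queues each up to 64K commands deep; the latency figure and NVMe queue depth below are illustrative assumptions, not measured results:

```python
def iops(outstanding_ios, latency_seconds):
    """Little's Law: throughput = concurrency / latency."""
    return outstanding_ios / latency_seconds

ahci_queue_depth = 32     # AHCI: one queue, 32 commands deep
nvme_queue_depth = 1024   # assumed; a modest fraction of what NVMe allows
latency = 100e-6          # assumed 100 microseconds per I/O

print(f"AHCI-style ceiling: {iops(ahci_queue_depth, latency):,.0f} IOPS")
print(f"NVMe-style ceiling: {iops(nvme_queue_depth, latency):,.0f} IOPS")
```

In other words, at a given device latency, the deeper and more numerous NVMe queues raise the ceiling on concurrent I/O, which is where much of the IOPS benefit comes from.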
NVMe and shared PCIe (e.g. shared PCIe flash DAS)
NVMe Related Content and Links
The following are some of my tips, articles, blog posts, presentations and other content on NVMe. Keep in mind that the question should not be if NVMe is in your future, but rather when, where, with what, from whom, and how much of it will be used, as well as how it will be used.
- NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
- Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
- Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
- PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
- NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
- What should I consider when using SSD cloud? (Via SearchCloudStorage)
- MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips) – PDF
- Selecting Storage: Start With Requirements (Via NetworkComputing)
- PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
- Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
- Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
- How many IOPS can a HDD, HHDD or SSD do (Part I)?
- How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
- I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM, including NAND flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPS) and more data to be moved (bandwidth) at a lower response time while using less CPU.
The above figure is a quick look comparing NAND flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take the results with a grain of salt. While IOPS and bandwidth are often discussed, keep in mind that the new protocol, drivers and device controllers that come with NVMe streamline I/O so that less CPU is needed.
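To put the SATA III versus NVMe (x4) comparison in rough context, the theoretical interface bandwidth can be sketched from the line rates and encoding overheads of the two interfaces. This assumes the NVMe device sits on PCIe Gen3 (common for NVMe at the time); the helper functions are illustrative:

```python
def sata3_bandwidth_mbs():
    """SATA III: 6 Gbps line rate, 8b/10b encoding -> ~600 MB/s."""
    return 6e9 * (8 / 10) / 8 / 1e6

def pcie_gen3_bandwidth_mbs(lanes):
    """PCIe Gen3: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s per lane."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e6

print(f"SATA III:     {sata3_bandwidth_mbs():.0f} MB/s")
print(f"PCIe Gen3 x4: {pcie_gen3_bandwidth_mbs(4):.0f} MB/s")
```

That is roughly 600 MB/s for SATA III versus nearly 4 GB/s for a PCIe Gen3 x4 NVMe device, before protocol and device overheads, which is why the bandwidth side of the comparison favors NVMe so heavily.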
If you are into the real bits and bytes details, such as at the device driver level, check out the Linux NVMe reflector forum. Learn more about NVMe at the NVM Express Organization site here, including news, products, events, updates and other resources.
Also check out the Server StorageIO companion micro site landing pages, including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here. Watch for updates with more content and links to be added here soon.
Ok, nuff said (for now)
All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2015 Server StorageIO(R) and UnlimitedIO All Rights Reserved