Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
A key to improving performance is benchmarking. Read about Microsoft Diskspd, a tool for storage and server benchmarking, and boost your I/O performance.
This is part one of a two-part post on Microsoft Diskspd that is also part of a broader series focused on server storage I/O benchmarking, performance, capacity planning, tools, and related technologies. You can view part two of this post here, along with companion links here.
Many people use Iometer to create synthetic (artificial) workloads to support benchmarking for testing, validation, and other activities. While Iometer with its GUI is relatively easy to use and available across many operating system (OS) environments, the tool also has its limits. One of the bigger limits is that Iometer has become dated, with little to no new development for a long time, while other tools, including some newer ones, continue to evolve in functionality and extensibility. Some of these tools have an optional GUI for ease of use or configuration, while others simply have extensive scripting and command parameter capabilities. Many tools are supported across different OSs, including physical, virtual, and cloud environments, while others such as Microsoft Diskspd are OS specific.
Instead of focusing on Iometer and other tools as well as benchmarking techniques (we cover those elsewhere), let's focus on Microsoft Diskspd.
What is Microsoft Diskspd?
Microsoft Diskspd is a synthetic workload generation (e.g., benchmark) tool that runs on various Windows systems as an alternative to Iometer, vdbench, iozone, iorate, fio, and sqlio, among other tools. Diskspd is a command line tool, which means it can easily be scripted to do reads and writes of various I/O sizes, including random as well as sequential activity. Server and storage I/O can be buffered file system I/O as well as non-buffered, across different types of storage and interfaces. Various performance and CPU usage information is provided to gauge the impact on a system when doing a given number of IOPS and amount of bandwidth, along with the resulting response time latency.
What can Diskspd do?
Microsoft Diskspd creates synthetic benchmark workload activity with the ability to define various options to simulate different application characteristics. This includes specifying reads and writes, random or sequential access, and I/O size, along with the number of threads to simulate concurrent activity. Diskspd can be used for testing or validating server and storage I/O systems along with associated software, tools, and components. In addition to being able to specify different workloads, Diskspd can also be told which processors to use (e.g., CPU affinity) and whether to use buffered or non-buffered IO, among other things.
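To make that concrete, the following Python helper is purely illustrative (the target path and parameter values are made up); it assembles a Diskspd argument list from a few workload characteristics using switches from the tool's help (`-b` block size, `-d` duration, `-t` threads, `-o` outstanding I/Os, `-w` write percentage, `-r` random access):

```python
def diskspd_cmd(target, block="8K", seconds=60, threads=4,
                outstanding=8, write_pct=30, random=True):
    """Assemble a Diskspd command line (as an argument list) for a
    mixed random workload. The switch names come from Diskspd's help;
    this helper itself is just an illustrative sketch."""
    args = [
        "diskspd",
        f"-b{block}",        # I/O block size
        f"-d{seconds}",      # test duration in seconds
        f"-t{threads}",      # threads per file (concurrency)
        f"-o{outstanding}",  # outstanding (overlapped) I/Os per thread
        f"-w{write_pct}",    # percentage of writes (the rest are reads)
        "-L",                # collect latency statistics
    ]
    if random:
        args.append("-r")    # random access instead of sequential
    args.append(target)
    return args

# Example: an 8K, 70/30 read/write random workload for 60 seconds
print(" ".join(diskspd_cmd(r"C:\test\iobw.tst")))
```

Building the command as a list like this makes it easy to drive Diskspd from a script that sweeps block sizes or thread counts.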
What type of storage does Diskspd work with?
Diskspd works with physical and virtual storage, including hard disk drives (HDD), solid state devices (SSD), and solid state hybrid drives (SSHD), in various systems or solutions. Targets can be physical drives as well as partitions or file systems. As with any workload tool that does writes, exercise caution to prevent accidental deletion or destruction of your data.
What information does Diskspd produce?
Diskspd provides output in text as well as XML formats. See an example of Diskspd output further down in this post.
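The XML format is handy for post-processing results in scripts. Here is a short Python sketch that pulls throughput numbers out of an XML result document. The element names used (`TimeSpan`, `TestTimeSeconds`, `ReadBytes`, `WriteBytes`) are assumptions based on typical Diskspd XML output, and the sample document is hand-made for illustration; verify both against what your Diskspd version actually emits:

```python
import xml.etree.ElementTree as ET

# A trimmed, hand-made sample shaped like Diskspd XML results
# (assumption: element names may differ by Diskspd version).
sample = """
<Results>
  <TimeSpan>
    <TestTimeSeconds>60.00</TestTimeSeconds>
    <Thread>
      <Target>
        <ReadBytes>1048576000</ReadBytes>
        <WriteBytes>262144000</WriteBytes>
      </Target>
    </Thread>
  </TimeSpan>
</Results>
"""

root = ET.fromstring(sample)
seconds = float(root.findtext("TimeSpan/TestTimeSeconds"))
# Sum across all threads/targets in the run
read_b = sum(int(e.text) for e in root.iter("ReadBytes"))
write_b = sum(int(e.text) for e in root.iter("WriteBytes"))
print(f"Read MB/s: {read_b / seconds / 1e6:.1f}")
print(f"Write MB/s: {write_b / seconds / 1e6:.1f}")
```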
Where to get Diskspd?
You can download your free copy of Diskspd from the Microsoft site here.
The download and installation are quick and easy, just remember to select the proper version for your Windows system and type of processor.
Another tip is to remember to set your PATH environment variable to point to where you put the Diskspd executable.
Also, stating what should be obvious: if you are going to be doing any benchmark or workload generation activity on a system where data could be over-written or deleted, make sure you have a good backup and a tested restore before you begin, in case something goes wrong.
New to server storage I/O benchmarking or tools?
If you are not familiar with server storage I/O performance benchmarking or using various workload generation tools (e.g., benchmark tools), Drew Robb (@robbdrew) has a Data Storage Benchmarking Guide article over at Enterprise Storage Forum that provides a good framework and summary quick guide to server storage I/O benchmarking.
Data storage benchmarking can be quite esoteric in that vast complexity awaits anyone attempting to get to the heart of a particular benchmark.
Case in point: The Storage Networking Industry Association (SNIA) has developed the Emerald benchmark to measure power consumption. This invaluable benchmark has a vast amount of supporting literature. That so much could be written about one benchmark test tells you just how technical a subject this is. And in SNIA’s defense, it is creating a Quick Reference Guide for Emerald (coming soon).
But rather than getting into the nitty-gritty nuances of the tests, the purpose of this article is to provide a high-level overview of a few basic storage benchmarks, what value they might have and where you can find out more.
Read more here including some of my comments, tips and recommendations.
In addition to Drew’s benchmarking quick reference guide, along with the server storage I/O benchmarking tools, technologies and techniques resource page (here), check out this companion post as a primer for benchmarking and associated topics titled Server and Storage I/O Benchmarking 101 for Smarties.
How do you use Diskspd?
Tip: When you run Microsoft Diskspd, it will create a file or data set on the device or volume being tested and direct its I/O at that file. Make sure you have enough disk space for what will be tested (e.g., if you are going to test 1TB, you need more than 1TB of free disk space). Another tip: to speed up initialization (when Diskspd creates the file that I/Os will be done to), run as administrator.
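That free-space check is easy to automate. Here is a small pre-flight sketch using only the Python standard library (the 10% headroom figure is an arbitrary assumption):

```python
import shutil

def enough_space(volume, test_file_bytes, headroom=1.10):
    """Return True if the volume has room for the Diskspd test file
    plus ~10% headroom (sketch; adjust headroom to taste)."""
    free = shutil.disk_usage(volume).free
    return free >= test_file_bytes * headroom

# e.g. before asking Diskspd to create a 1 TB test file with -c:
one_tb = 10**12
print(enough_space(".", one_tb))  # "." stands in for the target volume
```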
Tip: In case you forgot, a couple of other useful Microsoft tools (besides Perfmon) for working with and displaying server storage I/O devices including disks (HDDs and SSDs) are the commands "wmic diskdrive list [brief]" and "diskpart". Exercise caution with diskpart, as it can get you into trouble just as fast as it can get you out of it.
You can view the Diskspd commands after installing the tool; from a Windows command prompt, type `diskspd` (or `diskspd -?`).
This displays Diskspd help and information about the commands as follows.
Usage: `diskspd [options] target1 [ target2 [ target3 ...] ]`

version 2.0.12 (2014/09/17)

Available targets: a file path, `#<physical drive number>`, or `<partition drive letter>:`

| Option | Description |
| --- | --- |
| `-?` | display usage information |
| `-a#[,#[...]]` | advanced CPU affinity – affinitize threads to the CPUs provided after `-a` in a round-robin manner within the current KGroup (CPU count starts with 0); the same CPU can be listed more than once and the number of CPUs can differ from the number of files or threads (cannot be used with `-n`) |
| `-ag` | group affinity – affinitize threads in a round-robin manner across KGroups |
| `-b<size>[K\|M\|G]` | block size in bytes/KB/MB/GB [default=64K] |
| `-B<offset>[K\|M\|G\|b]` | base file offset in bytes/KB/MB/GB/blocks [default=0] (offset from the beginning of the file) |
| `-c<size>[K\|M\|G\|b]` | create files of the given size, stated in bytes/KB/MB/GB/blocks |
| `-C<seconds>` | cool down time – duration of the test after measurements have finished [default=0s] |
| `-D<milliseconds>` | print IOPS standard deviations; the deviations are calculated for samples of `<milliseconds>` duration [default=1000] |
| `-d<seconds>` | duration (in seconds) to run the test [default=10s] |
| `-f<size>[K\|M\|G\|b]` | file size – this parameter can be used to use only part of the file/disk/partition, for example to test only the first sectors of a disk |
| `-fr` | open file with the FILE_FLAG_RANDOM_ACCESS hint |
| `-fs` | open file with the FILE_FLAG_SEQUENTIAL_SCAN hint |
| `-F<count>` | total number of threads (cannot be used with `-t`) |
| `-g<bytes per ms>` | throttle throughput per thread to the given number of bytes per millisecond; cannot be specified when using completion routines |
| `-h` | disable both software and hardware caching |
| `-i<count>` | number of IOs (burst size) to issue before thinking; must be specified with `-j` |
| `-j<milliseconds>` | time to think in milliseconds before issuing a burst of IOs; must be specified with `-i` |
| `-I<priority>` | set IO priority to `<priority>`; available values: 1 (very low), 2 (low), 3 (normal, default) |
| `-l` | use large pages for IO buffers |
| `-L` | measure latency statistics |
| `-n` | disable affinity (cannot be used with `-a`) |
| `-o<count>` | number of overlapped I/O requests per file per thread (1=synchronous I/O, unless more than 1 thread is specified with `-F`) [default=2] |
| `-p` | start async (overlapped) I/O operations with the same offset (only makes sense with `-o2` or greater) |
| `-P<count>` | enable printing a progress dot after each `<count>` completed I/O operations, counted separately by each thread [default=65536] |
| `-r<alignment>[K\|M\|G\|b]` | random I/O aligned to `<alignment>` bytes (does not make sense with `-s`); `<alignment>` can be stated in bytes/KB/MB/GB/blocks [default access=sequential, default alignment=block size] |
| `-R<text\|xml>` | output format [default=text] |
| `-s<size>[K\|M\|G\|b]` | stride size (offset between starting positions of subsequent I/O operations) |
| `-S` | disable OS caching |
| `-t<count>` | number of threads per file (cannot be used with `-F`) |
| `-T<offset>[K\|M\|G\|b]` | stride between I/O operations performed on the same file by different threads [default=0] (starting offset = base file offset + thread number * `<offset>`); only makes sense with `-t` or `-F` |
| `-w<percentage>` | percentage of write requests (`-w` and `-w0` are equivalent); absence of this switch indicates 100% reads. IMPORTANT: your data will be destroyed without warning |
| `-W<seconds>` | warm up time – duration of the test before measurements start [default=5s] |
| `-x` | use completion routines instead of I/O Completion Ports |
| `-X<filepath>` | use an XML file for configuring the workload; cannot be used with other parameters |
| `-z[seed]` | set random seed [default=0 if parameter not provided, GetTickCount() if value not provided] |

Write buffer command options. By default, the write buffers are filled with a repeating pattern (0, 1, 2, ..., 255, 0, 1, ...):

| Option | Description |
| --- | --- |
| `-Z` | zero buffers used for write tests |
| `-Z<size>[K\|M\|G\|b]` | use a global buffer filled with random data as a source for write operations |
| `-Z<size>[K\|M\|G\|b],<file>` | use a global buffer filled with data from `<file>` as a source for write operations; if `<file>` is smaller than `<size>`, its content will be repeated multiple times in the buffer |

Synchronization command options:

| Option | Description |
| --- | --- |
| `-ys<eventname>` | signals event `<eventname>` before starting the actual run (no warm up) (creates a notification event if `<eventname>` does not exist) |
| `-yf<eventname>` | signals event `<eventname>` after the actual run finishes (no cool down) (creates a notification event if `<eventname>` does not exist) |
| `-yr<eventname>` | waits on event `<eventname>` before starting the run (including warm up) (creates a notification event if `<eventname>` does not exist) |
| `-yp<eventname>` | allows the run to be stopped when event `<eventname>` is set; also binds CTRL+C to this event (creates a notification event if `<eventname>` does not exist) |
| `-ye<eventname>` | sets event `<eventname>` and quits |

Event Tracing command options:

| Option | Description |
| --- | --- |
| `-ep` | use paged memory for the NT Kernel Logger (by default it uses non-paged memory) |
| `-eq` | use perf timer |
| `-es` | use system timer (default) |
| `-ec` | use cycle count |
| `-ePROCESS` | process start & end |
| `-eTHREAD` | thread start & end |
| `-eDISK_IO` | physical disk IO |
| `-eMEMORY_PAGE_FAULTS` | all page faults |
| `-eMEMORY_HARD_FAULTS` | hard faults only |
| `-eNETWORK` | TCP/IP, UDP/IP send & receive |
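Several of the switches above accept sizes with `[K|M|G|b]` suffixes, where `b` means blocks. As a sketch of how those suffixes translate to bytes (Diskspd's own parsing may differ in edge cases, so treat this as illustrative):

```python
def parse_size(spec, block_size=65536):
    """Parse a Diskspd-style size such as '64K', '1G', or '128b'
    (b = blocks) into bytes. Illustrative sketch only; Diskspd's
    own parsing may differ in edge cases."""
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "b": block_size}
    if spec and spec[-1] in units:
        return int(spec[:-1]) * units[spec[-1]]
    return int(spec)  # no suffix: plain bytes

print(parse_size("64K"))                  # 65536
print(parse_size("1G"))                   # 1073741824
print(parse_size("8b", block_size=4096))  # 32768
```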
Where to learn more
The following are related links to read more about server (cloud, virtual, and physical) storage I/O benchmarking tools, technologies, and techniques.
Drew Robb’s benchmarking quick reference guide
Server storage I/O benchmarking tools, technologies and techniques resource page
Server and Storage I/O Benchmarking 101 for Smarties.
Microsoft Diskspd download and Microsoft Diskspd overview (via Technet)
I/O, I/O how well do you know about good or bad server and storage I/Os?
Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I and Part II)
Wrap up and summary, for now…
This wraps up part one of this two-part post looking at the Microsoft Diskspd benchmark and workload generation tool. In part two (here) of this post series we take a closer look, including a test drive using Microsoft Diskspd.
Ok, nuff said (for now)
Published at DZone with permission of Greg Schulz, DZone MVB. See the original article here.