By: Ben Snaidero | Updated: 2017-03-24 | Comments (2) | Related: > Hardware
Problem
Whether you are implementing a new storage infrastructure or upgrading your existing hardware, it's always good to have a tool you can use to get a baseline measurement of storage subsystem performance, so you can compare that baseline with the performance after you've made your changes. This tip looks at using the DiskSpd utility to gather these performance metrics.
Solution
There are many different tools you could use to gather these performance metrics, but the DiskSpd utility is a good choice because it's well documented and easy to use. I also like that it's a command line tool, which makes it easy to run multiple tests with different parameters if that is something you require. If you haven't used it before, you can download it from Microsoft TechNet.
Once you've downloaded the software you can extract the appropriate executable and copy it to the server you want to run the test against. Let's first run the utility from the command prompt with the "-?" switch.
C:\>diskspd.exe -?
This command will output a complete listing of all the parameters available along with a short description. Below is a brief description of each of the parameters we will use for this test but you can get a complete description from the documentation that is included in the .zip file you downloaded earlier.
| Parameter | Description |
| --- | --- |
| -b | Block size for reads/writes. For this test we will use 64K since this is mainly what SQL Server uses to read data. You could run multiple tests using different block sizes to simulate other SQL Server read/write operations. |
| -d | Test duration in seconds. |
| -Suw | Disables software caching and enables write-through, bypassing hardware write caching. |
| -L | Gathers disk latency statistics. |
| -t | Number of threads per target. I keep this value equal to the number of cores on my server. |
| -W | Warmup duration. Number of seconds the test runs before statistics are gathered. |
| -w | Percentage of write requests, i.e. if set to 30%, the other 70% of the test IO will be reads. |
| -c | Creates a test file of the specified size. |
| > diskperf.out | Output file to save the generated statistics. If omitted, statistics are displayed on your screen. |
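Because DiskSpd is command-line driven, parameter sweeps are easy to script. Below is a minimal Python sketch that builds the command line for several block sizes (8K to approximate random OLTP IO, 64K for scans, 512K for large sequential operations). The block sizes other than 64K, and the helper function name, are illustrative assumptions, not part of the original test.

```python
# Hypothetical sweep: build DiskSpd command lines for several block sizes.
# Only the 64K run matches the test in this tip; the others are examples.

def build_diskspd_cmd(block_size, target=r"C:\diskperftestfile.dat",
                      duration=600, threads=8, warmup=30, write_pct=20,
                      file_size="10G"):
    """Return the argument list for one DiskSpd run."""
    return ["diskspd.exe", f"-b{block_size}", f"-d{duration}", "-Suw", "-L",
            f"-t{threads}", f"-W{warmup}", f"-w{write_pct}",
            f"-c{file_size}", target]

for bs in ("8K", "64K", "512K"):
    cmd = build_diskspd_cmd(bs)
    print(" ".join(cmd))
    # On the test server you would actually run it, e.g.:
    # with open(f"diskperf_{bs}.out", "w") as out:
    #     subprocess.run(cmd, stdout=out, check=True)
```

Each run writes to its own output file, so the results can be compared side by side afterwards.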
Now that we know what each parameter does, we can use the following command at the Windows command prompt to start our test.
C:\>diskspd.exe -b64K -d600 -Suw -L -t8 -W30 -w20 -c10G C:\diskperftestfile.dat > diskperf.out
Once the test completes we can take a look at the results by opening the output file we specified above. The output itself is fairly self-explanatory, but let's go through and give an overview of each section individually.
The first section of the output displays the command used to initiate this test along with a summary of the input parameters.
Command Line: diskspd.exe -b64K -d600 -Suw -L -t8 -W30 -w20 -c10G C:\diskperftestfile.dat

Input parameters:

	timespan:   1
	-------------
	duration: 600s
	warm up time: 30s
	cool down time: 0s
	measuring latency
	random seed: 0
	path: 'C:\diskperftestfile.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 80/20)
		block size: 65536
		using sequential I/O (stride: 65536)
		number of outstanding I/O operations: 2
		thread stride size: 0
		threads per file: 8
		using I/O Completion Ports
		IO priority: normal
This next section gives you an overview of CPU usage during the test. It helps you rule out a CPU bottleneck preventing you from getting the best performance out of your storage subsystem. That is definitely not the issue in our case, as you can see the CPU is ~98% idle.
actual test time: 600.00s
thread count: 8
proc count: 8

CPU |  Usage |  User  | Kernel |  Idle
-------------------------------------------
   0|   1.65%|   0.61%|   1.04%|  98.35%
   1|   1.40%|   0.49%|   0.91%|  98.60%
   2|   1.64%|   0.61%|   1.02%|  98.36%
   3|   4.11%|   0.32%|   3.79%|  95.89%
   4|   1.48%|   0.44%|   1.04%|  98.52%
   5|   1.23%|   0.27%|   0.96%|  98.77%
   6|   1.43%|   0.43%|   1.00%|  98.57%
   7|   1.19%|   0.31%|   0.88%|  98.81%
-------------------------------------------
avg.|   1.77%|   0.44%|   1.33%|  98.23%
This next section is the meat and potatoes of the report. Here we get a breakdown of the IOPS, MB/s and latency statistics for each thread in our test along with a summary. Note there is a separate table for Reads, Writes and Total IO.
Total IO
thread |       bytes |   I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------
     0 |  6983254016 | 106556 | 11.10 |    177.59 | 11.259 |     2.446 | C:\diskperftestfile.dat (10240MB)
     1 |  6918569984 | 105569 | 11.00 |    175.95 | 11.364 |     2.598 | C:\diskperftestfile.dat (10240MB)
     2 |  6918766592 | 105572 | 11.00 |    175.95 | 11.364 |     2.770 | C:\diskperftestfile.dat (10240MB)
     3 |  7065501696 | 107811 | 11.23 |    179.68 | 11.128 |     2.204 | C:\diskperftestfile.dat (10240MB)
     4 |  7078346752 | 108007 | 11.25 |    180.01 | 11.108 |     2.184 | C:\diskperftestfile.dat (10240MB)
     5 |  6915096576 | 105516 | 10.99 |    175.86 | 11.370 |     2.332 | C:\diskperftestfile.dat (10240MB)
     6 |  6920536064 | 105599 | 11.00 |    176.00 | 11.361 |     2.417 | C:\diskperftestfile.dat (10240MB)
     7 |  6908018688 | 105408 | 10.98 |    175.68 | 11.381 |     2.377 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------
total:   55708090368 | 850038 | 88.55 |   1416.73 | 11.291 |     2.424

Read IO
thread |       bytes |   I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------
     0 |  5587075072 |  85252 |  8.88 |    142.09 | 11.042 |     2.391 | C:\diskperftestfile.dat (10240MB)
     1 |  5529665536 |  84376 |  8.79 |    140.63 | 11.184 |     2.562 | C:\diskperftestfile.dat (10240MB)
     2 |  5520490496 |  84236 |  8.77 |    140.39 | 11.179 |     2.837 | C:\diskperftestfile.dat (10240MB)
     3 |  5650841600 |  86225 |  8.98 |    143.71 | 10.913 |     2.176 | C:\diskperftestfile.dat (10240MB)
     4 |  5655363584 |  86294 |  8.99 |    143.82 | 10.854 |     2.091 | C:\diskperftestfile.dat (10240MB)
     5 |  5527568384 |  84344 |  8.79 |    140.57 | 11.202 |     2.345 | C:\diskperftestfile.dat (10240MB)
     6 |  5527896064 |  84349 |  8.79 |    140.58 | 11.186 |     2.440 | C:\diskperftestfile.dat (10240MB)
     7 |  5529731072 |  84377 |  8.79 |    140.63 | 11.218 |     2.379 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------
total:   44528631808 | 679453 | 70.78 |   1132.42 | 11.096 |     2.414

Write IO
thread |       bytes |   I/Os |  MB/s | I/O per s | AvgLat | LatStdDev | file
----------------------------------------------------------------------------------------------------
     0 |  1396178944 |  21304 |  2.22 |     35.51 | 12.125 |     2.468 | C:\diskperftestfile.dat (10240MB)
     1 |  1388904448 |  21193 |  2.21 |     35.32 | 12.079 |     2.616 | C:\diskperftestfile.dat (10240MB)
     2 |  1398276096 |  21336 |  2.22 |     35.56 | 12.094 |     2.350 | C:\diskperftestfile.dat (10240MB)
     3 |  1414660096 |  21586 |  2.25 |     35.98 | 11.986 |     2.103 | C:\diskperftestfile.dat (10240MB)
     4 |  1422983168 |  21713 |  2.26 |     36.19 | 12.114 |     2.256 | C:\diskperftestfile.dat (10240MB)
     5 |  1387528192 |  21172 |  2.21 |     35.29 | 12.037 |     2.155 | C:\diskperftestfile.dat (10240MB)
     6 |  1392640000 |  21250 |  2.21 |     35.42 | 12.056 |     2.193 | C:\diskperftestfile.dat (10240MB)
     7 |  1378287616 |  21031 |  2.19 |     35.05 | 12.036 |     2.250 | C:\diskperftestfile.dat (10240MB)
----------------------------------------------------------------------------------------------------
total:   11179458560 | 170585 | 17.77 |    284.31 | 12.066 |     2.305
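When running many tests it can be handy to pull the summary figures out of the report programmatically. Here is a minimal Python sketch that extracts the "total:" row (bytes, I/Os, MB/s, IOPS, average latency, latency standard deviation); the sample line is copied from the Total IO table above, and a real script would read it from diskperf.out instead. The function name and dictionary keys are my own, not part of DiskSpd.

```python
import re

# Sample "total:" row taken from the Total IO table in this tip.
sample = ("total:       55708090368 |       850038 |      88.55 "
          "|    1416.73 |   11.291 |     2.424")

def parse_total_row(line):
    """Parse the numeric columns of a DiskSpd 'total:' summary row."""
    nums = re.findall(r"[\d.]+", line)
    keys = ("bytes", "ios", "mb_per_s", "iops", "avg_lat_ms", "lat_stdev_ms")
    return dict(zip(keys, (float(n) for n in nums)))

totals = parse_total_row(sample)
print(totals["iops"], totals["avg_lat_ms"])  # 1416.73 11.291
```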
Finally we have a summary table of per-percentile latencies from our test. Note that the higher "nines" percentiles will sometimes show identical values, as they do from the 6-nines row onward in this example. This is because the test did not generate enough IO samples to differentiate these percentiles.
  %-ile | Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |     0.135 |      1.112 |      0.135
   25th |     9.696 |     10.647 |      9.861
   50th |    10.940 |     11.912 |     11.122
   75th |    12.260 |     13.275 |     12.490
   90th |    13.563 |     14.634 |     13.832
   95th |    14.415 |     15.507 |     14.726
   99th |    16.426 |     17.637 |     16.794
3-nines |    32.362 |     28.716 |     31.543
4-nines |    58.255 |     45.651 |     56.318
5-nines |   141.810 |    130.510 |    140.661
6-nines |   189.805 |    183.253 |    189.805
7-nines |   189.805 |    183.253 |    189.805
8-nines |   189.805 |    183.253 |    189.805
9-nines |   189.805 |    183.253 |    189.805
    max |   189.805 |    183.253 |    189.805
Now that you have a good baseline to refer back to, you can rerun the same test any time you suspect an issue with your storage hardware, or after any storage subsystem maintenance, to confirm whether performance has in fact changed.
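The baseline comparison can itself be automated. Below is an illustrative Python sketch that flags metrics which regressed beyond a tolerance; the baseline numbers match the Total IO summary above, while the rerun numbers, the 10% tolerance, and the function itself are hypothetical examples.

```python
# Hypothetical comparison of a baseline DiskSpd run against a later rerun.
def compare_runs(baseline, current, tolerance=0.10):
    """Return metrics whose current value is more than `tolerance` worse
    than baseline. For latency higher is worse; for throughput lower is."""
    higher_is_worse = {"avg_lat_ms"}
    regressions = {}
    for metric, base in baseline.items():
        cur = current[metric]
        if metric in higher_is_worse:
            worse_by = (cur - base) / base
        else:
            worse_by = (base - cur) / base
        if worse_by > tolerance:
            regressions[metric] = (base, cur)
    return regressions

baseline = {"iops": 1416.73, "mb_per_s": 88.55, "avg_lat_ms": 11.291}
rerun    = {"iops": 1190.20, "mb_per_s": 74.40, "avg_lat_ms": 13.950}  # made-up rerun
print(compare_runs(baseline, rerun))
```

With the made-up rerun numbers above, all three metrics fall outside the 10% tolerance and would be flagged for investigation.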
Next Steps
- Read other tips on tools for benchmarking IO operations
- Read more on tools for simulating SQL Server IO - SQLIOStress and SQLIO