Last updated: 2018-01-08
CLI
diskspd32 / diskspd64 = the CLI mode of CrystalDiskMark
CLI Usage
diskspd32.exe [options] target1
Opts:
- -b<size>[K|M|G] block size [default=64K]
- -d<seconds> duration (in seconds) to run test [default=10s]
- -r<align>[K|M|G|b] random I/O aligned to <align> in bytes/KiB/MiB ...
- -v verbose mode
- -t<count> number of threads per target
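For instance, the flags above can be combined into a random-read run (a sketch; testfile.dat is assumed to already exist, e.g. created with -c in an earlier run):

```shell
# 4 KiB random reads for 30 s with 4 threads,
# random I/O aligned to 4 KiB (-r4K)
diskspd32.exe -b4K -r4K -d30 -t4 testfile.dat
```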
Write Test
-w<percentage> Percentage of write requests (absence of this switch indicates 100% reads)
IMPORTANT: a write test will destroy existing data without warning (on the target file or drive!!)
-h disable both software caching and hardware "write" caching
-S disable software caching
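A write-test invocation might look like this (a sketch combining the flags above; testfile.dat is a throwaway target):

```shell
# 100% 4 KiB random writes for 30 s, with software and hardware
# write caching disabled (-h)
# WARNING: -w overwrites the target's contents; never point it at real data
diskspd32.exe -c1G -b4K -r -w100 -h -d30 testfile.dat
```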
Example
# Create 8192KB file and run read test on it for 1 second
diskspd32.exe -c8192K -d1 testfile.dat
# Create two 1GB files,
# set block size to 4KB,
# create 2 threads per file, affinitize threads
# (each file will have threads affinitized to both CPUs)
diskspd32.exe -c1G -b4K -t2 -d10 -a0,1 testfile1.dat testfile2.dat
Test inside VM
Read:
diskspd32.exe -c1G -d30 testfile.dat
Write:
diskspd32.exe -w100 -h -c2G -d30 testfile.dat
1.68 | 26.93 # io='threads'
4.89 | 78.32 # io='native'
Remark
With cache='none', even reads are painfully slow
QxTy
Q = Queue Depth
= how many requests the drive has at one time.
= number of pending transactions to disk
The higher the QD, the higher the latency
- desktop: <4
- heavy io servers: ~16
T = Thread
how many processes are accessing the drive at once
CrystalDiskMark Test Mode
- Q8T8
- Q32T1
- Q1T1
Total number of jobs = Q X T
Q8T8 = 8 * 8 = 64
* the SATA protocol is limited to a queue depth of just 32 => results may deviate from what is expected
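The QxTy labels above can be mapped onto diskspd flags (assuming -o<count> sets the outstanding I/Os per thread, i.e. the queue depth Q, and -t<count> sets the thread count T); the arithmetic below just checks the total:

```shell
# Q8T8:  queue depth 8 per thread, 8 threads
#   diskspd32.exe -t8 -o8 -b4K -d10 testfile.dat
# Q32T1: queue depth 32, single thread
#   diskspd32.exe -t1 -o32 -b4K -d10 testfile.dat

# total outstanding I/Os = Q * T
q=8; t=8
echo $((q * t))   # prints 64
```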
P.S.
NVMe can handle a QD of up to ~65,000 per queue
SATA: NCQ(Native Command Queuing)
It allows hard disk drives to internally optimize
the order in which received read and write commands are executed.
=> reduce the amount of unnecessary drive head movement
My test result
https://datahunter.org/hdd_speed