Fio is a multithread-capable I/O performance testing tool.
A number of command-line options modify the behavior of the test.
Here are a few important ones:
--rw=MODE
    lets you specify your workload:
        write - sequential writes
        read  - sequential reads
        rw    - sequential reads and writes (default 50/50)
--name=FILE
    name of the job
--size=BYTES
    size of the I/O for the job
--bs=BYTES
    block size
--direct=BOOL
    non-buffered (true) or buffered (false) I/O
--numjobs=NUMBER
    number of threads
--group_reporting
    display the result for the whole group instead of for every thread

For example, running fio --rw=rw --name=testfile --size=128M --bs=128K --direct=1 --numjobs=4 --group_reporting results in the following output:
testfile: (g=0): rw=rw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=1
...
fio-2.19
Starting 4 processes
Jobs: 2 (f=2): [_(2),M(2)][100.0%][r=35.7MiB/s,w=36.9MiB/s][r=285,w=295 IOPS][eta 00m:00s]
testfile: (groupid=0, jobs=4): err= 0: pid=3213: Thu May 11 14:36:41 2017
  read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(261MiB/9167msec)
    clat (usec): min=554, max=532949, avg=11901.95, stdev=36977.09
     lat (usec): min=555, max=532950, avg=11902.44, stdev=36977.09
    clat percentiles (usec):
     |  1.00th=[    572],  5.00th=[    636], 10.00th=[    780], 20.00th=[   1128],
     | 30.00th=[   1224], 40.00th=[   1336], 50.00th=[   1448], 60.00th=[   1656],
     | 70.00th=[   2672], 80.00th=[   7712], 90.00th=[  25984], 95.00th=[  62208],
     | 99.00th=[ 199680], 99.50th=[ 261120], 99.90th=[ 444416], 99.95th=[ 477184],
     | 99.99th=[ 536576]
  write: IOPS=219, BW=27.5MiB/s (28.8MB/s)(251MiB/9167msec)
    clat (usec): min=572, max=941129, avg=5394.03, stdev=30216.75
     lat (usec): min=581, max=941134, avg=5404.25, stdev=30216.74
    clat percentiles (usec):
     |  1.00th=[    676],  5.00th=[    788], 10.00th=[    876], 20.00th=[   1272],
     | 30.00th=[   1400], 40.00th=[   1496], 50.00th=[   1560], 60.00th=[   1784],
     | 70.00th=[   2128], 80.00th=[   3280], 90.00th=[   8640], 95.00th=[  15040],
     | 99.00th=[  64768], 99.50th=[ 108032], 99.90th=[ 171008], 99.95th=[ 831488],
     | 99.99th=[ 937984]
    lat (usec) : 750=6.25%, 1000=5.57%
    lat (msec) : 2=53.86%, 4=12.33%, 10=9.50%, 20=4.86%, 50=3.74%
    lat (msec) : 100=2.12%, 250=1.39%, 500=0.32%, 750=0.02%, 1000=0.05%
  cpu          : usr=0.09%, sys=0.48%, ctx=4118, majf=0, minf=47
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=2087,2009,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=28.5MiB/s (29.9MB/s), 28.5MiB/s-28.5MiB/s (29.9MB/s-29.9MB/s), io=261MiB (274MB), run=9167-9167msec
  WRITE: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=251MiB (263MB), run=9167-9167msec

Disk stats (read/write):
  sda: ios=2075/2001, merge=0/6, ticks=24774/11073, in_queue=35846, util=99.05%

dd can also provide quick ballpark numbers. Running dd if=/dev/zero of=test bs=512 count=2048 oflag=dsync will give you a rough idea of the latency, because oflag=dsync forces every 512-byte block to be committed to disk before the next write starts. A run that writes a single large block instead measures sequential throughput and produces output like this:

1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.50512 s, 113 MB/s
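Returning to fio: instead of passing everything on the command line, the same options can be collected in a job file. A minimal sketch (the file name testfile.fio is my own choice, not from the text above):

```ini
; Equivalent of:
; fio --rw=rw --name=testfile --size=128M --bs=128K --direct=1 --numjobs=4 --group_reporting
[testfile]
rw=rw
size=128M
bs=128k
direct=1
numjobs=4
group_reporting
```

Run it with: fio testfile.fio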
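Because each synced write must complete before the next one starts, the average per-write latency of such a dd run can be estimated by dividing the total elapsed time dd reports by the number of records written. A quick sketch in Python (the elapsed time below is a hypothetical value, not taken from any output in this text):

```python
# Estimate average per-write latency from a dd oflag=dsync run.
# With bs=512 count=2048, dd issues 2048 synchronous 512-byte writes.
count = 2048          # number of synced writes (dd's count=)
elapsed_s = 2.5       # total time reported by dd (hypothetical value)

avg_latency_ms = elapsed_s / count * 1000
print(f"average latency: {avg_latency_ms:.3f} ms per synced write")
```

This is only a rough estimate: it folds dd's own overhead into the figure, which is why fio's clat/lat percentiles above give a much more detailed picture.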