====== mpi-tile-io ======
---- dataentry benchmark ----
name : mpi-tile-io
layers_ttags : POSIX, MPI-IO
features_ttags : metadata
parallel_ttags : MPI
type_ttags : synthetic
license_ttag : MIT
webpage_url : http://www.mcs.anl.gov/research/projects/pio-benchmark/
----
The mpi-tile-io benchmark tests I/O performance in a real-world scenario: each MPI process accesses one tile of a dense two-dimensional dataset.
===== Usage =====
After compiling the benchmark with ''mpicc'', run the tool with ''mpirun''. Its behavior is controlled by command-line options; the most important ones are:
  * ''--nr_tiles_x'' number of tiles in the x dimension
  * ''--nr_tiles_y'' number of tiles in the y dimension
  * ''--sz_tile_x'' number of elements per tile in x
  * ''--sz_tile_y'' number of elements per tile in y
  * ''--sz_element'' size of one element in bytes
  * ''--filename'' name of the data file (must already exist)
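Since the benchmark reads from an existing file, the data file has to be created before the run. One simple way (the 1 GiB size is chosen to match the example run below; adjust it to your tile parameters):

```shell
# Create a file large enough for a 512x512 dataset of 4096-byte elements:
# 512 * 512 * 4096 bytes = 1 GiB
dd if=/dev/zero of=testfile bs=1M count=1024
```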
===== Example Output =====
Running
<code>
mpirun -np 4 mpi-tile-io --nr_tiles_x 2 --nr_tiles_y 2 --sz_tile_x 256 --sz_tile_y 256 --sz_element 4096 --filename testfile
</code>
will produce output like:
<code>
# mpi-tile-io run on Arch-PC
# 4 process(es) available, 4 used
# filename: testfile
# collective I/O off
# 0 byte header
# 512 x 512 element dataset, 4096 bytes per element
# 2 x 2 tiles, each tile is 256 x 256 elements
# tiles overlap by 0 elements in X, 0 elements in Y
# total file size is ~1024.00 Mbytes, 1 file(s) total.
# Times are total for all operations of the given type
# Open: min_t = 0.028317, max_t = 0.028376, mean_t = 0.028336, var_t = 0.000000
# Read: min_t = 0.178774, max_t = 0.185347, mean_t = 0.182057, var_t = 0.000009
# Close: min_t = 0.000100, max_t = 0.000116, mean_t = 0.000109, var_t = 0.000000
# Note: bandwidth values based on max_t (worst case)
Read Bandwidth = 5524.772 Mbytes/sec
</code>
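The summary numbers in the output follow directly from the input parameters. The sketch below (variable names are ours; the timing value is copied from the run above) shows how the dataset size and the reported read bandwidth relate:

```python
# Parameters of the example run: 2x2 tiles of 256x256 elements,
# 4096 bytes per element
nr_tiles_x, nr_tiles_y = 2, 2
sz_tile_x, sz_tile_y = 256, 256
sz_element = 4096

# Full dataset: 512 x 512 elements
dataset_x = nr_tiles_x * sz_tile_x
dataset_y = nr_tiles_y * sz_tile_y

file_size_bytes = dataset_x * dataset_y * sz_element
file_size_mb = file_size_bytes / (1024 * 1024)
print(f"{dataset_x} x {dataset_y} elements, ~{file_size_mb:.2f} Mbytes")

# Bandwidth is based on the slowest process (max_t, worst case)
read_max_t = 0.185347
print(f"Read Bandwidth = {file_size_mb / read_max_t:.3f} Mbytes/sec")
```

With the example parameters this reproduces the ~1024.00 Mbytes file size and the reported read bandwidth of about 5524.772 Mbytes/sec.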