Submission Rules

The following rules should ensure a fair comparison of the IO500 results between systems and configurations. They serve to reduce mistakes and improve accuracy.

  1. Submissions are made using the latest version of the IO500 application on GitHub, and all binaries should be built according to the included build instructions (see the build example after this list).
    1. $ git clone https://github.com/io500/io500.git -b io500-sc20
  2. Read-after-write semantics: The system must be able to correctly read freshly written data from a different client node after the close operation on the writer has been completed.
  3. All create/write phases must run for at least 300 seconds; the stonewall flag must therefore be set to 300, which should ensure this (see the configuration example after this list).
    1. We defined a very high workload for all benchmarks that should satisfy this requirement, but you may have to set higher values.
  4. There can be no edits made to the source code, including bundled codes such as IOR. Submitters who have a legitimate reason may request an exception from the committee via committee@io500.org.
  5. The file names for the mdtest output files may not be pre-created.
  6. You must run all phases of the benchmark on a single storage system without interruption.
  7. There is no limitation on the number of storage nodes; the storage servers may optionally be co-located with the client nodes.
  8. All data must be written to persistent storage within the measured time for the individual benchmark; e.g., if a file system caches data, it must ensure that the data is persistently stored before acknowledging the close.
  9. Results must be submitted in accordance with the instructions on our submission page. Please verify the correctness of your submission before you submit it.
  10. If a tool other than the included pfind is used for the find phase, it must follow the same input and output behavior as the included pfind, and its source code must be included in the submission (see the example invocation after this list).
    1. It is not required to capture the list of matched files.
  11. Please also refer to the README documents in the GitHub repository.
  12. Please read the CHANGELOG.md file for recent changes to the IO500 benchmark.
  13. Only submissions using at least 10 physical client nodes are eligible to win IO500 awards, and at least one benchmark process must run on each node (see the example launch command after this list).
    1. We accept results on fewer nodes for documentation purposes, but they are not eligible for awards.
    2. Virtual machines can be used, but the above rule must be followed. More than one virtual machine can be run on each physical node.
    3. For the 10 Node Challenge, there must be exactly 10 physical client nodes, and at least one benchmark process must run on each client node.
    4. The only exception to this rule is the find benchmark, which may optionally use fewer nodes/processes.
  14. Each of the four main phases (IOR easy and hard, and mdtest easy and hard) has a subdirectory which can be pre-created and tuned (e.g., using tools such as lfs setstripe or beegfs-ctl); however, additional subdirectories within these subdirectories cannot be pre-created (see the striping example after this list).
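
Build example (rule 1): a typical checkout and build is sketched below. The prepare.sh script in the repository fetches and builds the bundled tools (IOR, mdtest, pfind); consult the build instructions in the repository if your environment needs extra steps, e.g. pointing at a specific MPI installation.

  $ git clone https://github.com/io500/io500.git -b io500-sc20
  $ cd io500
  $ ./prepare.sh   # fetches and builds the bundled tools; see the README if a separate make step is needed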
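
Configuration example (rule 3): the stonewall timer is set in the benchmark's .ini configuration file. A minimal sketch follows, assuming the section and key names used by the config-full.ini shipped with the io500-sc20 release; verify against the config files in your checkout.

  [global]
  datadir = ./datafiles

  [debug]
  # Must remain at 300 for a valid submission; lower values are for test runs only.
  stonewall-time = 300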
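
Example invocation (rule 10): the bundled pfind is invoked along the following lines during the find phase, and a replacement tool must accept the same arguments and produce the same output. The paths and process count here are illustrative only; the actual arguments are generated by the io500 application.

  $ mpiexec -np 16 ./bin/pfind ./datafiles -newer ./datafiles/timestampfile -size 3901c -name "*01*"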
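
Example launch command (rule 13): to satisfy the 10-node eligibility rule, launch the benchmark so that every physical client node hosts at least one process. A hypothetical Open MPI launch is sketched below; the hostnames and the config file name are placeholders.

  $ cat hostfile       # all 10 physical client nodes, 4 slots each
  node01 slots=4
  node02 slots=4
  ...                  # node03 through node09
  node10 slots=4
  $ mpirun -np 40 --hostfile hostfile ./io500 config-mine.ini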
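
Striping example (rule 14): on Lustre, the per-phase subdirectories may be pre-created with different stripe layouts. The directory names below are illustrative and should match the datadir layout your run actually uses.

  $ mkdir -p ./datafiles/ior-easy ./datafiles/ior-hard
  $ lfs setstripe -c 1  ./datafiles/ior-easy   # file-per-process: one OST per file
  $ lfs setstripe -c -1 ./datafiles/ior-hard   # shared file: stripe across all OSTs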

Please send any requests for changes to these rules or clarifying questions to our mailing list.