The IO-500 has been developed together with the community, and its development is still ongoing. The benchmark is essentially a benchmark suite bundled with execution rules; it harnesses existing, trusted open-source benchmarks.
The goal of the benchmark is to capture user-experienced performance. It aims to be:
We publish multiple lists for each BoF at SC and ISC, and we also maintain the current, most up-to-date lists. We publish a historic list of all submissions received as well as multiple lists filtered from it. We maintain a Full List: the subset of submissions that were valid according to the list-specific rules in place at the time of the list's publication.
Our primary lists are the Ranked Lists, which show only opted-in submissions from the Full List and only the best submission per storage system. We maintain two ranked lists: the IO500 List, for submissions that ran on any number of client nodes, and the 10 Node Challenge List, for submissions that ran on exactly ten client nodes.
In summary, for each BoF, we have the following lists:

- the Historic List of all submissions received,
- the Full List of submissions valid under the rules in place at publication time,
- the ranked IO500 List (any number of client nodes), and
- the ranked 10 Node Challenge List (exactly ten client nodes).
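To make the relationship between these lists concrete, the following Python sketch shows how the ranked lists are derived from the Full List. It is an illustration only, not the actual list-generation code, and the field names (system, score, client_nodes, opted_in, valid) are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Submission:
        system: str        # name of the storage system
        score: float       # overall IO-500 score
        client_nodes: int  # number of client nodes used in the run
        opted_in: bool     # submitter opted into the ranked lists
        valid: bool        # passed the list-specific rules at publication time

    def full_list(historic):
        # The Full List is the subset of the historic list that was valid
        # under the rules in place when the list was published.
        return [s for s in historic if s.valid]

    def ranked(full, keep=lambda s: True):
        # Ranked lists show only opted-in submissions and, per storage
        # system, only the best-scoring one.
        best = {}
        for s in full:
            if s.opted_in and keep(s):
                if s.system not in best or s.score > best[s.system].score:
                    best[s.system] = s
        return sorted(best.values(), key=lambda s: s.score, reverse=True)

    def io500_list(full):
        return ranked(full)  # any number of client nodes

    def ten_node_challenge(full):
        return ranked(full, lambda s: s.client_nodes == 10)  # exactly ten nodes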
The benchmark covers various workloads and computes a single score for comparison. The workloads are:

- IOR-easy: bulk I/O with a well-optimized, system-friendly access pattern,
- IOR-hard: bulk I/O with a challenging interleaved small-transfer pattern,
- MDtest-easy: metadata operations on empty files in separate directories,
- MDtest-hard: metadata operations on small files in a shared directory, and
- find: locating a subset of the files created by the other workloads.
The individual performance numbers are preserved and accessible via the web interface or the raw data, which allows deriving other relevant metrics.
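Because the per-workload numbers are preserved, the single score can be recomputed, or other metrics derived, from the raw data. Below is a minimal Python sketch, assuming the geometric-mean composition the IO-500 uses: a bandwidth score (GiB/s) and a metadata score (kIOP/s), each a geometric mean of their phase results, combined into the single score by a further geometric mean. The phase counts and values here are purely illustrative.

    from math import prod

    def geomean(values):
        # Geometric mean: the n-th root of the product of n values.
        return prod(values) ** (1.0 / len(values))

    # Hypothetical per-phase results: bandwidth phases in GiB/s,
    # metadata phases in kIOP/s.
    bandwidth = [42.0, 50.1, 1.2, 3.4]      # e.g. IOR easy/hard, write/read
    metadata = [120.0, 310.0, 15.0, 900.0]  # e.g. MDtest phases and find

    bw_score = geomean(bandwidth)           # GiB/s
    md_score = geomean(metadata)            # kIOP/s
    score = (bw_score * md_score) ** 0.5    # the single IO-500 score

    print(f"BW={bw_score:.2f} GiB/s, MD={md_score:.2f} kIOP/s, SCORE={score:.2f}")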
We are in the process of establishing a procedure to extend the current workloads with further meaningful metrics.
We welcome the promotion of the IO-500 using the logo.
IO-500 logo license terms
The IO-500 logo is copyrighted by us but may be used under the following conditions:
If you are in doubt, contact the steering board.