The IO-500 has been developed together with the community and its development is still ongoing. The benchmark is essentially a benchmark suite bundled with execution rules. It harnesses existing and trusted open source benchmarks.
The goal for the benchmark is to capture user-experienced performance. It aims to be:
There are several lists available that show the submissions. We provide three types of lists: the ranked list, the full list, and derived lists. Any submission will always be visible in the full list.
For a ranked list, however, there are conditions that disqualify submissions. Ranked lists are used in a competition for an award, while any other list serves an informational purpose. Firstly, a submitter must permit the use in a specific ranked list – this allows submitting results from smaller systems without competing with big systems. Secondly, a list may require certain conditions; for example, the 10 Node Challenge in 2018 required that 10 client nodes were used. The primary list of the IO-500 is the (ranked) IO-500 list, the prestigious award list this document is about.
A derived list uses some kind of metrics to reorder submissions or hide specific conditions, e.g., fastest systems with HDDs.
The benchmark covers various workloads and computes a single score for comparison. The workloads are:
The individual performance numbers are preserved and accessible via the web or the raw data. This allows deriving other relevant metrics.
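As a minimal sketch of how such a composite score can be derived from the preserved per-phase numbers, the snippet below computes geometric means over bandwidth and metadata results and combines them into one figure. The phase names, the sample numbers, and the exact combination rule here are illustrative assumptions, not the authoritative scoring definition.

```python
import math

def geometric_mean(values):
    """Geometric mean of a list of positive numbers."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-phase results: bandwidth phases in GiB/s,
# metadata phases in kIOPS. Real runs report more phases.
bandwidth_results = [10.2, 4.8, 7.5, 3.1]
metadata_results = [120.0, 45.0, 80.0, 30.0]

bw_score = geometric_mean(bandwidth_results)
md_score = geometric_mean(metadata_results)

# Assumed combination rule: geometric mean of the two sub-scores.
total_score = math.sqrt(bw_score * md_score)
print(f"BW={bw_score:.2f} MD={md_score:.2f} SCORE={total_score:.2f}")
```

Because the per-phase numbers remain accessible in the raw data, the same inputs can be re-aggregated with any other metric of interest.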
We are in the process of establishing a procedure to extend the current workloads with further meaningful metrics.
We welcome the promotion of the IO-500 using the logo.
IO-500 logo license terms
The IO-500 logo is copyrighted by us but may be used under the following conditions:
If you are in doubt, contact the steering board.