About

The IO-500 has been developed together with the community, and its development is ongoing. The benchmark is essentially a suite of existing, trusted open source benchmarks bundled with execution rules.

The goal for the benchmark is to capture user-experienced performance. It aims to be:

  • Representative
  • Understandable
  • Scalable
  • Portable
  • Inclusive
  • Lightweight
  • Trustworthy

The Lists

We publish multiple lists for each BoF at SC and ISC, and we also maintain the current, up-to-date lists. We publish a historic list of all submissions received and multiple lists filtered from it. We maintain a Full List, the subset of submissions that were valid according to the list-specific rules in place at the time of the list's publication.

Our primary lists are Ranked Lists, which show only opted-in submissions from the Full List and only the best submission per storage system. We have two ranked lists: the IO500 List, for submissions that ran on any number of client nodes, and the 10 Node Challenge List, for submissions that ran on exactly ten client nodes.

In summary, for each BoF, we have the following lists:

  • Historic list: all submissions ever received
  • Full list: the subset from the historic list that was valid
  • IO500 List: the subset from the full list with only the best submission per storage system
  • 10 Node Challenge List: the subset from the full list with only the best submission per storage system that ran on exactly ten client nodes

Workloads

The benchmark covers various workloads and computes a single score for comparison. The workloads are:

  • IOEasy: Applications with well optimized I/O patterns
  • IOHard: Applications that require a random workload
  • MDEasy: Metadata/small objects
  • MDHard: Small files (3901 bytes) in a shared directory
  • Find: Finding relevant objects based on patterns

The individual performance numbers are preserved and accessible on the web and in the raw data, which allows other relevant metrics to be derived.
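
To make the scoring concrete, here is a minimal sketch of how the individual phase results can be combined into a single score. To our understanding, the published score is the geometric mean of a bandwidth component (GiB/s) and a metadata component (kIOPS), each of which is itself the geometric mean of its per-phase results; the phase names and values below are illustrative assumptions, not real submission data.

  from math import prod

  def geometric_mean(values):
      # Geometric mean of a sequence of positive per-phase results.
      return prod(values) ** (1.0 / len(values))

  # Hypothetical per-phase results as they might appear in a raw result file.
  bandwidth_gib_s = {            # bandwidth phases, GiB/s
      "ior-easy-write": 10.2,
      "ior-hard-write": 0.31,
      "ior-easy-read": 12.8,
      "ior-hard-read": 0.55,
  }
  metadata_kiops = {             # metadata phases, kIOPS
      "mdtest-easy-write": 55.0,
      "mdtest-hard-write": 8.4,
      "find": 310.0,
      "mdtest-easy-stat": 120.0,
      "mdtest-hard-stat": 95.0,
      "mdtest-easy-delete": 40.0,
      "mdtest-hard-read": 30.0,
      "mdtest-hard-delete": 12.0,
  }

  bw_score = geometric_mean(list(bandwidth_gib_s.values()))   # GiB/s
  md_score = geometric_mean(list(metadata_kiops.values()))    # kIOPS
  total_score = (bw_score * md_score) ** 0.5                  # overall score

  print(f"BW {bw_score:.2f} GiB/s, MD {md_score:.2f} kIOPS, SCORE {total_score:.2f}")

Because a geometric mean is used, a very weak phase cannot be offset by a single strong one, which fits the goal of capturing balanced, user-experienced performance.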

We are in the process of establishing a procedure to extend the current workloads with further meaningful metrics.

Further reading

We welcome the promotion of the IO-500 using the logo.

IO-500 logo license terms

The IO-500 logo is copyrighted by us but may be used under the following conditions:

  1. The logo is used for its intended purpose of promoting the IO-500. You may use it:
    1. together with results obtained by using the IO-500
    2. with statements that you are using the benchmark
    3. together with opinions about the benchmark
  2. The appearance of the logo shall not be modified. You may change the file format and resolution.
  3. The logo must be placed onto a white or gray background.

If you are in doubt, contact the steering board.

Download the logo as PDF