====== About ======

The IO-500 has been developed together with the [[io500:about:steering|community]] and its development is still ongoing. The benchmark is essentially a benchmark suite bundled with execution rules; it harnesses existing and trusted open-source benchmarks. The goal of the benchmark is to capture user-experienced performance. It aims to be:

  * Representative
  * Understandable
  * Scalable
  * Portable
  * Inclusive
  * Lightweight
  * Trustworthy

===== The Lists =====

We publish multiple lists for each BoF at SC and ISC, and we also maintain the current, most up-to-date lists. We publish a **historic list** of all submissions received, plus multiple lists filtered from it. We maintain a **Full List**, the subset of submissions that were valid according to the [[io500:rules:submission|list-specific rules]] in place at the time of the list's publication. Our primary lists are **Ranked Lists**, which show only opted-in submissions from the **Full List** and only the best submission per storage system. We have two ranked lists: the **IO500 List** for submissions that ran on any number of client nodes, and the **10 Node Challenge List** for only those submissions that ran on exactly ten client nodes.

In summary, for each BoF, we have the following lists:

  * **Historic List**: all submissions ever received
  * **Full List**: the subset of the **Historic List** that was valid
  * **IO500 List**: the subset of the **Full List** with only the best submission per storage system
  * **10 Node Challenge List**: the subset of the **Full List** with only the best submission per storage system, run on exactly ten client nodes

===== Workloads =====

The benchmark covers various workloads and computes a single score for comparison (see the scoring sketch at the end of this page). The workloads are:

  * IOEasy: applications with well-optimized I/O patterns
  * IOHard: applications that require a random workload
  * MDEasy: metadata/small objects
  * MDHard: small files (3901 bytes) in a shared directory
  * Find: finding relevant objects based on patterns

The individual performance numbers are preserved and accessible via the web or the raw data, which allows deriving other relevant metrics. We are establishing a procedure to extend the current workloads with further meaningful metrics.

===== Further reading =====

  * {{ :io500:about:io500-establishing.pdf|White paper: Establishing the IO-500 Benchmark}}
  * [[https://hps.vi4io.org/_media/research/publications/2018/dltvifiatikl18-the_virtual_institute_for_i_o_and_the_io_500.pdf|Poster: The Virtual Institute for I/O and the IO-500]]
  * See also the various presentations on our [[io500:news|news page]].

===== Using the IO-500 Logo =====

We welcome the promotion of the IO-500 using the logo.

**IO-500 logo license terms**

The IO-500 logo is copyrighted by us but may be used under the following conditions:

  - The logo is used for its intended purpose, to promote the IO-500. You may use it:
    * together with results obtained by using the IO-500
    * with statements that you are using the benchmark
    * together with opinions about the benchmark
  - The appearance of the logo shall not be modified. You may change the file format and resolution.
  - The logo must be placed onto a white or gray background.

If you are in doubt, contact the [[mailto:io-500-board@vi4io.org|steering board]].

{{ :io500:about:logo-io500.pdf|Download the logo as PDF}}
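
===== Scoring sketch =====

The Workloads section above notes that the individual results are combined into a single score. As a minimal, illustrative sketch, the snippet below assumes a scheme of geometric means: a bandwidth score (GiB/s) over the bulk-I/O phases, a metadata score (kIOPS) over the metadata phases, and their geometric mean as the final score. The phase names and all numbers are placeholders for illustration, not real results.

<code python>
import math

def geometric_mean(values):
    # Geometric mean, so no single phase dominates the combined score.
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative placeholder measurements -- NOT real results.
bandwidth_gib_s = {      # bulk-I/O phases, measured in GiB/s
    "ior-easy-write": 12.3,
    "ior-easy-read": 14.1,
    "ior-hard-write": 0.9,
    "ior-hard-read": 1.7,
}
metadata_kiops = {       # metadata phases, measured in kIOPS
    "mdtest-easy-write": 55.0,
    "mdtest-easy-stat": 120.0,
    "mdtest-easy-delete": 40.0,
    "mdtest-hard-write": 9.0,
    "mdtest-hard-stat": 85.0,
    "mdtest-hard-read": 30.0,
    "mdtest-hard-delete": 8.0,
    "find": 200.0,
}

bw_score = geometric_mean(bandwidth_gib_s.values())  # GiB/s
md_score = geometric_mean(metadata_kiops.values())   # kIOPS
score = math.sqrt(bw_score * md_score)               # combined single score
print(f"BW = {bw_score:.2f} GiB/s, MD = {md_score:.2f} kIOPS, score = {score:.2f}")
</code>

The individual numbers published with each submission make it possible to recompute such aggregates or to derive alternative metrics from the raw data.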