====== Submission Rules ======
  
The following rules are intended to ensure a fair comparison of IO500 results between systems and configurations. They serve to reduce mistakes and improve accuracy.
  
  
  - Submissions are made using the latest version of the IO500 application in GitHub, and all binaries should be built according to the included build instructions (a build-and-run sketch follows this list).
      - $ git clone https://github.com/io500/io500.git -b io500-sc20
  - Read-after-write semantics: The system must be able to correctly read freshly written data from a different client node after the close operation on the writer has been completed.
    - All create/write phases must run for at least 300 seconds; the stonewall flag must be set to 300, which should ensure this (an illustrative configuration sketch follows this list).
      - We defined a very high workload for all benchmarks that should satisfy this requirement, but you may have to set higher values.
      - There can be no edits made to the source code, including bundled codes such as IOR. Submitters with a legitimate reason may request an exception from the committee via committee@io500.org.
  - The file names for the mdtest output files may not be pre-created.
  - You must run all phases of the benchmark on a single storage system without interruption.
  - There is no limitation on the number of storage nodes; the storage servers may optionally be co-located on the client nodes.
  - All data must be written to persistent storage within the measured time for the individual benchmark; e.g., if a file system caches data, it must ensure that data is persistently stored before acknowledging the close.
  - Submitting the results must be done in accordance with the instructions on our submission page. Please verify the correctness of your submission before you submit it.
  - If a tool other than the included pfind is used for the find phase, it must follow the same input and output behavior as the included pfind.
    - It is not required to capture the list of matched files.
  - Please also refer to the README documents in the GitHub repo.
  - Please read the CHANGELOG.md file for recent changes to the IO500 benchmark.
  - Only submissions using at least 10 physical client nodes are eligible to win IO500 awards, and at least one benchmark process must run on each node (see the MPI placement sketch after this list).
    - We accept results on fewer nodes for documentation purposes, but they are not eligible for awards.
    - Virtual machines can be used, but the above rule must be followed. More than one virtual machine can be run on each physical node.
    - For the 10 Node Challenge, there must be exactly 10 physical client nodes, and at least one benchmark process must run on each client node.
    - The only exception to this rule is the find benchmark, which may optionally use fewer nodes/processes.
  - Each of the four main phases (IOR easy and hard, and mdtest easy and hard) has a subdirectory which can be precreated and tuned, e.g. using tools such as lfs setstripe or beegfs-ctl (see the striping examples after this list); however, additional subdirectories within these subdirectories cannot be precreated.
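
As a reference for the first rule, here is a minimal sketch of a typical clone, build, and run sequence. The build script name (prepare.sh) and the example configuration file (config-minimal.ini) reflect recent releases of the io500 repository and may differ in your checkout; the included README and build instructions are authoritative. The process count, hostfile, and MPI launcher syntax (Open MPI style) are placeholders.

<code bash>
# Clone the release branch named in the rules and build the bundled
# benchmarks (IOR, mdtest, pfind) with the included script; no edits
# to the sources are allowed.
git clone https://github.com/io500/io500.git -b io500-sc20
cd io500
./prepare.sh

# Run all phases in one uninterrupted pass across the client nodes.
# The configuration file name is illustrative; start from the example
# configs shipped with the release and keep the stonewall at 300 s.
mpirun -np 320 --hostfile ./hostfile ./io500 config-minimal.ini
</code>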
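
The 300-second stonewall requirement is normally expressed in the application's .ini configuration. The fragment below is only an illustration: the section and key names ([global] datadir, [debug] stonewall-time) are assumptions based on the example configurations shipped with the application, and the data directory path is hypothetical; consult config-full.ini in your checkout for the authoritative option names.

<code bash>
# Write a hypothetical minimal configuration; 300 seconds is the
# minimum stonewall value accepted for a valid submission.
cat > my-io500.ini <<'EOF'
[global]
datadir = /mnt/yourfs/io500-data

[debug]
stonewall-time = 300
EOF
</code>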
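
For the award-eligibility and 10 Node Challenge rules, the sketch below shows one common way to guarantee at least one benchmark process on every physical client node. The hostnames, slot counts, and rank count are placeholders, and the launcher options (--hostfile, --map-by node) are Open MPI specific; any launcher configuration that places at least one rank on each listed node is acceptable.

<code bash>
# Hostfile listing exactly 10 physical client nodes (hostnames and
# slot counts are placeholders).
cat > hostfile <<'EOF'
client01 slots=16
client02 slots=16
client03 slots=16
client04 slots=16
client05 slots=16
client06 slots=16
client07 slots=16
client08 slots=16
client09 slots=16
client10 slots=16
EOF

# Map ranks round-robin across nodes so that every node runs at
# least one benchmark process (Open MPI syntax shown).
mpirun -np 160 --hostfile ./hostfile --map-by node ./io500 config-minimal.ini
</code>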
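
The last rule permits tuning the four pre-created phase subdirectories. The commands below are only examples of such tuning: the subdirectory paths are hypothetical (they depend on your configured data directory and the layout created by the application), and the stripe settings are illustrative values, not recommendations.

<code bash>
# Lustre: widen striping for the bandwidth-oriented IOR easy phase.
lfs setstripe -c -1 -S 16M /mnt/yourfs/io500-data/ior-easy

# BeeGFS: set the stripe pattern on the same (hypothetical) directory.
beegfs-ctl --setpattern --numtargets=8 --chunksize=2m /mnt/yourfs/io500-data/ior-easy
</code>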
Please send any requests for changes to these rules or clarifying questions to our mailing list.
  