io500:rules:submission — revised 2020/05/22 21:47 and 2020/05/28 19:18 by john_bent
The following rules should ensure a fair comparison of the IO-500 results between systems and configurations. They serve to reduce mistakes and improve accuracy.
  
For ISC20, submissions should use the new C application io500, which automatically runs both the new C version and the existing bash version (following the SC19 rules) to ensure consistency of results between the two implementations. Submitters with a legitimate reason may request an exception to this rule from the committee via committee@io500.org.
  
Details are provided in https://github.com/VI4IO/io500-app/blob/master/README-ISC20.txt. For the ISC20 list, we will use the results from whichever of the two runs has the higher overall score, assuming no abnormalities are found in our analysis across all submissions. If no abnormalities are found, future lists will require only one run.
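For illustration only, a launch of the C application might look like the sketch below. The config file name, process count, and launcher are assumptions, not part of the rules; README-ISC20.txt is the authoritative guide.

```shell
# Hypothetical invocation of the io500 C application; adjust -np, any
# hostfile, and the ini file for your own system. This sketch only
# attempts the run when both the launcher and the binary are present.
CONFIG=config.ini   # illustrative name; see the example configs in io500-app
NPROCS=10           # e.g. one process per node for the 10 Node Challenge
if command -v mpirun >/dev/null 2>&1 && [ -x ./io500 ]; then
    mpirun -np "$NPROCS" ./io500 "$CONFIG"
else
    echo "io500 binary or mpirun not available; see README-ISC20.txt"
fi
```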
    - For the 10 node challenge, there must be exactly 10 physical nodes and at least one benchmark process must run on each
    - The only exception to this rule is the find benchmark, which may optionally use fewer nodes/processes
  - Each of the four main phases (IOR easy and hard, and mdtest easy and hard) has a subdirectory which can be precreated and tuned (e.g. using tools such as lfs setstripe or beegfs-ctl); however, additional subdirectories within these subdirectories cannot be precreated.
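As an illustration of the precreation rule, a minimal sketch follows. The working directory, subdirectory names, stripe count, and stripe size are assumptions chosen for the example, not mandated values.

```shell
# Precreate the four benchmark subdirectories under an illustrative
# working directory, then tune one of them before the run.
WORKDIR=./io500-datadir
mkdir -p "$WORKDIR"/ior-easy "$WORKDIR"/ior-hard \
         "$WORKDIR"/mdtest-easy "$WORKDIR"/mdtest-hard

# Apply a (hypothetical) wide stripe to the IOR-easy directory; only
# attempt this when the Lustre client tools are actually installed.
# -c -1 stripes across all OSTs; -S 16M sets a 16 MiB stripe size.
if command -v lfs >/dev/null 2>&1; then
    lfs setstripe -c -1 -S 16M "$WORKDIR"/ior-easy
fi
```

Creating further subdirectories inside these four ahead of time (for example, per-process directories for mdtest) would violate the rule above.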
  
Please send any requests for changes to these rules or clarifying questions to our mailing list.