====== FS: Lustre ======

===== Characteristics =====

<code>
name:Lustre
</code>
| + | |||
| + | |||
| + | ===== Description ===== | ||
| + | |||
| + | The deployed file system is the Fujitsu Exabyte File System (FEFS), that is based on Lustre. | ||
| + | Effectively, | ||
| + | The local file system consists of 2592 OSS (5184 OSTs) and the global file system of 90 OSS (2880 OSTs); performance of the local file system is 3.2 TB/s and 1.4 TB/s for read and write, respectively [3]. This is much higher than the global file system which provides a throughput of 0.2 TB/s. | ||
| + | A staging mechanism transfers data before/ | ||
| + | The local file system can only be accessed from compute nodes and not from login or post-processing nodes. | ||
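
Because the local file system is only mounted on compute nodes, application code that should also run on login or post-processing nodes typically has to check at runtime where it may write. The C sketch below illustrates such a fallback; the mount points ''/local/scratch'' and ''/global/data'' are hypothetical placeholders, not the actual paths of this system.

<code c>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Hypothetical mount points -- the real paths are not given on this page
 * and must be taken from the site documentation. */
#define LOCAL_FS_DIR  "/local/scratch"   /* local FEFS, compute nodes only  */
#define GLOBAL_FS_DIR "/global/data"     /* global FEFS, visible everywhere */

/* Prefer the faster local file system if it is mounted on this node;
 * login and post-processing nodes fall back to the global file system. */
static const char *pick_output_dir(void)
{
    struct stat st;
    if (stat(LOCAL_FS_DIR, &st) == 0 && S_ISDIR(st.st_mode))
        return LOCAL_FS_DIR;
    return GLOBAL_FS_DIR;
}

int main(void)
{
    char path[4096];
    snprintf(path, sizeof(path), "%s/example.out", pick_output_dir());

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    fputs("results produced on this node\n", f);
    fclose(f);

    printf("wrote %s\n", path);
    return EXIT_SUCCESS;
}
</code>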
| + | |||
| + | ===== Measurement protocols ===== | ||
| + | |||
| + | ==== Peak performance ==== | ||
| + | |||
| + | |||
| + | ==== Sustained metadata performance ==== | ||
| + | |||
The reported metadata rate has been measured using mdtest on 9000 nodes with 100 files per node; see [3, slide 34].
Other benchmark runs have been conducted as well; for example, an mdtest run on a varying number of clients shows that performance degrades as the number of clients increases and that the degradation depends on the type of operation [2, slide 23].
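
To illustrate what such a metadata benchmark exercises, the sketch below creates, stats, and removes a set of empty files and reports the resulting operation rates. It is a minimal single-process approximation of the file-per-process pattern that mdtest drives in parallel across many nodes; the directory path is a placeholder, and the file count merely mirrors the 100 files per node of the cited run.

<code c>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <time.h>

#define NFILES  100                    /* files per process                         */
#define TESTDIR "/local/scratch/mdt"   /* placeholder directory, must already exist */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    char path[4096];
    double t;

    /* Phase 1: create empty files (file creation rate). */
    t = now();
    for (int i = 0; i < NFILES; i++) {
        snprintf(path, sizeof(path), "%s/file.%d", TESTDIR, i);
        int fd = open(path, O_CREAT | O_WRONLY | O_EXCL, 0644);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }
        close(fd);
    }
    printf("create: %.0f ops/s\n", NFILES / (now() - t));

    /* Phase 2: stat the files (metadata lookup rate). */
    t = now();
    for (int i = 0; i < NFILES; i++) {
        struct stat st;
        snprintf(path, sizeof(path), "%s/file.%d", TESTDIR, i);
        if (stat(path, &st) < 0) { perror("stat"); return EXIT_FAILURE; }
    }
    printf("stat:   %.0f ops/s\n", NFILES / (now() - t));

    /* Phase 3: remove the files (unlink rate). */
    t = now();
    for (int i = 0; i < NFILES; i++) {
        snprintf(path, sizeof(path), "%s/file.%d", TESTDIR, i);
        if (unlink(path) < 0) { perror("unlink"); return EXIT_FAILURE; }
    }
    printf("unlink: %.0f ops/s\n", NFILES / (now() - t));

    return EXIT_SUCCESS;
}
</code>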
| + | |||
| + | |||
| + | ==== Sustained performance ==== | ||
| + | |||
| + | This was measured with the IOR benchmark using POSIX-I/O (90 OSS, 2880 OSTs). | ||
| + | See [2] for a description. | ||
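
For orientation, the sketch below shows the core access pattern that IOR's POSIX backend times for a single process: writing a file in fixed-size transfers and deriving a bandwidth from the elapsed time. The file path, block size, and transfer size are illustrative placeholders and are not the parameters of the cited measurement; in the actual benchmark, many such processes run in parallel and MPI is used to synchronize the phases and aggregate the per-process results.

<code c>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define FILE_PATH  "/global/data/ior.test"  /* placeholder output file     */
#define XFER_SIZE  (1 << 20)                /* 1 MiB per write() call      */
#define BLOCK_SIZE (256L << 20)             /* 256 MiB written per process */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    char *buf = malloc(XFER_SIZE);
    if (!buf) return EXIT_FAILURE;
    memset(buf, 'x', XFER_SIZE);

    int fd = open(FILE_PATH, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* Sequentially write BLOCK_SIZE bytes in XFER_SIZE-sized chunks,
     * the per-process access pattern of IOR's POSIX backend. */
    double t = now();
    for (long written = 0; written < BLOCK_SIZE; written += XFER_SIZE) {
        if (write(fd, buf, XFER_SIZE) != XFER_SIZE) {
            perror("write");
            return EXIT_FAILURE;
        }
    }
    fsync(fd);   /* include the time to flush data to the servers */
    close(fd);

    double secs = now() - t;
    printf("wrote %ld MiB in %.2f s: %.1f MiB/s\n",
           BLOCK_SIZE >> 20, secs, (BLOCK_SIZE >> 20) / secs);

    free(buf);
    return EXIT_SUCCESS;
}
</code>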