Latest: www.spec.org/hpg/hpc2021/Docs/
Contents

I. Introduction
II. Hardware
   A. CPUs
   B. Memory
   C. Disk
III. Software
   A. Operating System
   B. Compilers or binaries
   C. MPI and Ranks
IV. Install media
V. Don't be root. Turn privileges off. (Usually.)
VI. Problems?
SPEChpc 2021 includes source code and data sets for 9 benchmarks in 4 suites; each suite is a different workload size, targeting different sized systems.
Short Tag | Suite | Contents | Metrics | How many ranks? What do higher scores mean?
---|---|---|---|---
Tiny | SPEChpc 2021 Tiny Workload | 9 benchmarks | SPEChpc 2021_tny_base, SPEChpc 2021_tny_peak | The Tiny workloads use up to 60 GB of memory and are intended for use on a single node using between 1 and 256 ranks. More nodes and ranks may be used, but higher rank counts may see lower scaling as MPI communication becomes more dominant. Higher scores indicate that less time is needed.
Small | SPEChpc 2021 Small Workload | 9 benchmarks | SPEChpc 2021_sml_base, SPEChpc 2021_sml_peak | The Small workloads use up to 480 GB of memory and are intended for use on one or more nodes using between 64 and 1024 ranks. More ranks may be used, but higher rank counts may see lower scaling as MPI communication becomes more dominant. Higher scores indicate that less time is needed.
Medium | SPEChpc 2021 Medium Workload | 6 benchmarks | SPEChpc 2021_med_base, SPEChpc 2021_med_peak | The Medium workloads use up to 4 TB of memory and are intended for use on a mid-size cluster using between 256 and 4096 ranks. More ranks may be used, but higher rank counts may see lower scaling as MPI communication becomes more dominant. Higher scores indicate that less time is needed.
Large | SPEChpc 2021 Large Workload | 6 benchmarks | SPEChpc 2021_lrg_base, SPEChpc 2021_lrg_peak | The Large workloads use up to 14.5 TB of memory and are intended for use on larger clusters using between 2048 and 32,768 ranks. More ranks may be used, but higher rank counts may see lower scaling as MPI communication becomes more dominant. Higher scores indicate that less time is needed.
The "Short Tag" is the canonical abbreviation for use with runhpc, where context is defined by the tools. In a published document, context may not be clear. To avoid ambiguity in published documents, the Suite Name or the Metrics should be spelled as shown above.
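On the command line, the short tag names the suite directly. A hedged sketch (the config file name and rank count here are placeholders, not values from this document):

```
runhpc --config=my.cfg --ranks=64 --tune=base tiny
```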
You may choose which suite(s) you would like to run (there is no requirement to run all of them); your choice affects hardware requirements, particularly memory and disk space.
Having chosen a suite, if you will use your results in public, then you must run all the benchmarks in the suite (exceptions) and produce at least the base metric. The peak metric is optional. If producing both base and peak, you will need more disk space.
SPEC supplies toolsets for ARM, Power ISA, or x86_64.
Limitations apply:
Although SPEChpc suites are intended to be useful with a wide range of chip architectures, in some cases it is possible that you may find that your chip is not compatible with the available toolsets (for example, if your chip is too old to run one of the supported OS+chip combinations). See the section on Supported Tools.
The nominal main memory requirements for each workload are given in the suite table above.
(In this section, 1 GB = 2^30 bytes and 1 TB = 2^40 bytes)
Warnings:
The disk space recommendations below are only estimates. Your environment may differ.
The SPEC tools (and optionally the output_root directory) must be installed on a shared network file system when using more than one node.
Estimated disk usage: 50 GB.
Each build and run (base or peak, different parallel models, different compiler flags, etc.) takes approximately 0.4 GB.
Your usage may differ, due to hardware, operating system, disk type, file system type, compiler, and, especially, compiler tuning.
Once you know the space consumption pattern for your hardware and software, you can adjust the above estimates to be more accurate for your needs.
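As a rough planning aid, the estimates above can be turned into quick shell arithmetic. The 50 GB base and 0.4 GB-per-build figures come from this document; the build count is a made-up example:

```shell
# Rough disk estimate: 50 GB install + ~0.4 GB per build/run.
builds=20                         # hypothetical number of builds/runs
tenths=$((500 + 4 * builds))      # work in tenths of a GB to keep integer math
echo "approx $((tenths / 10)).$((tenths % 10)) GB"
```

With 20 builds this prints "approx 58.0 GB"; substitute your own build count.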
The SPEChpc 2021 toolset relies on several open source components, including GNU Make, Perl, and others. SPEC supplies pre-built versions of these for particular combinations of hardware and operating system, as shown in the table below.
Supported Toolsets for SPEChpc 2021
Toolset | Intended Use
---|---
linux-aarch64 | 64-bit AArch64 systems running Linux. |
linux-armv7l | Linux systems with ARM Cortex-A7-compatible CPUs. |
linux-ppc64le | 64-bit little-endian PowerPC Linux systems.
linux-x86_64 | x86_64 Linux systems, linked using libnsl.so.1.
linux-x86_64-rhel8 | x86_64 Linux systems, linked using libnsl.so.2.
Limitations apply: Although SPEC has tested the above, it is possible that you may encounter an OS+chip combination that is not compatible with its intended toolset. In such cases:
libnsl.so: In order to be compatible with older Linux distributions, the tools are linked against libnsl.so.1, which has since been deprecated. However, most newer distributions install it for compatibility or simply link libnsl.so.1 to libnsl.so.2. If the 'linux-x86_64' tools package is used but you get runtime errors from the tools, such as 'specperl', check whether libnsl.so.1 is installed on your system.
Note that RHEL 8 does not provide this compatibility by default. You will need to install libnsl.so.1, or use the linux-x86_64-rhel8 toolset via "install.sh -u linux-x86_64-rhel8".
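One quick way to check for the legacy library is to ask the dynamic linker's cache. A sketch (library paths and ldconfig location vary by distribution):

```shell
# Check whether the legacy libnsl.so.1 is known to the dynamic linker.
# (ldconfig may live in /sbin; adjust PATH if needed.)
if ldconfig -p 2>/dev/null | grep -q 'libnsl\.so\.1'; then
  echo "libnsl.so.1 present: the linux-x86_64 toolset should work"
else
  echo "libnsl.so.1 missing: install it, or use linux-x86_64-rhel8"
fi
```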
What about other systems? For systems that are not listed in the table of supported toolsets:
If the tools do not work, it might or might not be possible for you to build them yourself.
If you would like to try to build the tools, please see the document Building the SPEChpc 2021 Toolset.
SPEC may be able to provide advice for your build; however,
it will not always be practical for SPEC to do so.
You might not succeed.
SPEC supplies the benchmarks in source code form. Therefore, you will need either:
compilers to build them
--or--
pre-compiled binaries, supplied by someone who has already built them.
Config file
The SPEChpc benchmarks use the MPI-3 standard. Therefore, you will need an installation of MPI that supports MPI-3 and is configured for the compilers you intend to use.
You can use SPEChpc 2021 to measure the compute performance of a system (node) or of multiple interconnected systems (cluster) with physical or virtual CPUs/processors. Optionally, attached accelerator devices may be used with OpenACC or OpenMP. You can choose to measure all of the processors on a single system, one or more nodes of a cluster, or a subset.
Typical: All nodes using all physical CPUs
Usually SPEChpc has been used to measure entire clusters, with all of the physical CPU chips.
Alternatives: Accelerators, Single Node, partition, zone,...
There is no prohibition against using SPEChpc to measure a subset of the nodes on a cluster.
If you use a subset, it must have enough memory and disk. For public results, follow the usual rules. (Examples: use only methods that are documented, supported, and generally available to customers; fully disclose what you do, with sufficient detail so that the result can be reproduced; and if you enhance performance by doing something outside the subset, disclose it.)
How many Ranks?
You can use SPEChpc 2021 to measure performance with large numbers of nodes and ranks. However, since SPEChpc is strong-scaled, you may see performance degradation as the MPI rank count is increased. As the number of nodes increases, it is suggested that you move to the Medium or Large workloads.
SPEChpc 2021 approximate rank scaling by workload when using only MPI parallelism
Tiny | Small | Medium | Large
---|---|---|---
1-256 | 64-1024 | 256-4096 | 2048-32768
Note that this is only a suggested range. You may use a smaller number of ranks, provided the nodes have enough memory. Larger numbers of ranks can be used, but you may see lower scaling.
With accelerators, scaling may decrease at smaller rank counts.
With multiple CPU threads enabled, scaling may decrease at smaller rank counts as well, depending upon the number of threads used per rank.
Due to domain decomposition, some benchmarks may achieve better performance with a base-2 number of ranks. However, most benchmarks should be able to run at any arbitrary number of ranks. The exceptions are MiniSweep, which may give incorrect answers when using odd rank counts (except 1 and 3 ranks), and LBM, which may give incorrect answers when using rank counts 2 to 4 times the top of the scaling ranges given above.
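Before launching a job, it can be handy to compare a planned rank count against the suggested ranges in the table above. A hypothetical helper (the suite and rank values here are examples, not recommendations):

```shell
# Compare a planned rank count against the suggested per-suite ranges
# from the table above. Example values; replace with your own.
suite=tiny
ranks=512
case "$suite" in
  tiny)   lo=1;    hi=256 ;;
  small)  lo=64;   hi=1024 ;;
  medium) lo=256;  hi=4096 ;;
  large)  lo=2048; hi=32768 ;;
esac
if [ "$ranks" -gt "$hi" ]; then
  echo "$ranks ranks is above the suggested $suite range ($lo-$hi); expect lower scaling"
elif [ "$ranks" -lt "$lo" ]; then
  echo "$ranks ranks is below the suggested $suite range ($lo-$hi); check memory per rank"
else
  echo "$ranks ranks is within the suggested $suite range ($lo-$hi)"
fi
```

With the example values, this warns that 512 ranks exceeds the Tiny suite's suggested 1-256 range.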
You should be familiar with basic shell commands for Linux (ls, cp, mkdir, ...).
You will need access to the SPEChpc 2021 installation media, typically as an ISO image or an xz-compressed tar package. The Installation Guides (Linux) explain how to use it.
On some systems, the mount command may require privileges or may require additional software. In such cases, you might need to burn a physical DVD using some other system; or, you might need to use the procedure described in the appendix to the Linux installation guide to extract a tarball and use that instead.
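The tarball route needs only tar with xz support, no mount privileges. The sketch below demonstrates the mechanics with a stand-in archive it creates itself; the real package name (something like hpc2021-*.tar.xz) and contents will differ:

```shell
# Demonstrate the extract-a-tarball route with a stand-in archive.
# The real SPEChpc package name and layout will differ.
demo=$(mktemp -d)
mkdir -p "$demo/pkg" && printf 'echo installer\n' > "$demo/pkg/install.sh"
tar -C "$demo" -cJf "$demo/hpc2021-demo.tar.xz" pkg   # stand-in for the download
mkdir -p "$demo/dest"
tar -C "$demo/dest" -xf "$demo/hpc2021-demo.tar.xz"   # extract, then run install.sh
ls "$demo/dest/pkg"
```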
Please note that the SPEChpc 2021 license agreement does not allow you to post the SPEChpc 2021 software on any public server. If your institution has a SPEChpc 2021 license, then it's fine to post it on an internal server that is accessible only to members of your institution.
Usually, you do not need privileges. The one known exception is that during installation, on some systems it is possible that you might need a privileged user to mount the installation media or to allocate resources, for example disk space.
After installation is complete, you should not need privileges to run SPEChpc benchmarks. In general, it is recommended that you use an ordinary user account for SPEChpc; that way, if your config file accidentally tries to delete the wrong directory, you are much less likely to damage your system.
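A one-line sanity check before installing or running, using the standard id command:

```shell
# Quick check that you are not about to run SPEChpc as root.
if [ "$(id -u)" -eq 0 ]; then
  echo "caution: running as root; prefer an ordinary user account"
else
  echo "ok: running as an ordinary user (uid $(id -u))"
fi
```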
Warning: SPEChpc config files can execute arbitrary shell commands. SPEC recommends that you read a config file before using it.
In case of difficulties, please check the document SPEChpc 2021 Frequently Asked Questions.
SPEChpc™ 2021 System Requirements: Copyright © 2021 Standard Performance Evaluation Corporation (SPEC)