(To check for possible updates to this document, please see http://www.spec.org/hpg/hpc2021/Docs/ )
Contents
1. Review Pre-requisites
2. Create destination directory. Have enough space; avoid spaces in the path.
3. Mount the Benchmark ISO
4. Set your directory to the Benchmark ISO
5. Use install.sh
5.a. Destination selection
5.b. Toolset selection
5.c. The files are unpacked and tested
6. Source shrc or cshrc
7. Try to build one benchmark
8. Try running one benchmark with the test dataset
9. Try a real dataset
10. Try a full (reportable) run
Appendix 1: Uninstalling SPEChpc 2021
Note: links to SPEChpc 2021 documents on this web page assume that you are reading the page from a directory that also contains the other SPEChpc 2021 documents. If by some chance you are reading this web page from a location where the links do not work, try accessing the referenced documents at one of the following locations:
The SPEChpc 2021 suite has been tested under Linux. The benchmark environment may work with Mac OS X and other UNIX systems, but it has not been tested there. The suite can be installed on many Linux distributions, but only Linux has been tested.
Reminder: the SPEC license allows you to install on multiple systems as you may wish within your institution; but you may not share the software with the public.
The installation procedure for Unix, Linux, and Mac OS X is as follows:
Review the hardware and software requirements, in system-requirements.html
Create a directory on the destination disk. You should make sure that you have a disk that has at least 8GB free. (For more information on disk usage, see system-requirements.html.)
Don't put spaces in the path: even if you make it through the installation (doubtful), you are just asking for trouble, because there may be multiple programs, from both SPEC and your compiler, that expect a space to be an argument delimiter, not part of a path name. (This being the *Unix* install guide, you wouldn't have thought of using spaces in the first place, would you?)
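Both requirements (enough free space, no spaces in the path) can be checked from the shell before you start. The following is only a sketch: the destination path is an example, and the 8 GB figure is the minimum mentioned above (see system-requirements.html for details).

```shell
# Sketch: create a destination directory (path is an example) and check
# that it has at least 8 GB free and no spaces in its name.
DEST="${DEST:-$HOME/HPC2021}"        # pick your own destination

case "$DEST" in
    *' '*) echo "Warning: path contains a space; choose another location" ;;
esac

mkdir -p "$DEST"

# df -P -k prints available space in 1 KB blocks in column 4; 8 GB = 8388608 KB
free_kb=$(df -P -k "$DEST" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt 8388608 ]; then
    echo "Warning: less than 8 GB free at $DEST"
fi
```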
You can either burn a DVD of the benchmark ISO file, or you can just directly mount the benchmark ISO file you have downloaded. If you choose to mount the benchmark ISO file, the following examples may help you get it mounted. The examples assume the benchmark has been saved in the file hpc2021-1.1.n.iso (where "n" is the patch release number). The target location listed in these examples is /mnt but could be anything you have created.
After you are done installing, you may want to unmount the benchmark ISO. To do so, make sure your current directory is no longer inside the mount point, and then issue the command umount /mnt, which will unmount the filesystem. If you are on Solaris, you may also want to remove the lofi device that was created with the lofiadm command. See the man page for further instructions.
AIX:      loopmount -i hpc2021-1.1.n.iso -o "-V cdrfs -o ro" -m /mnt
Linux:    mount -t iso9660 -o ro,loop hpc2021-1.1.n.iso /mnt
Solaris:  mount -F hsfs -o ro `lofiadm -a hpc2021-1.1.n.iso` /mnt
If you have created a DVD, insert the DVD and, if necessary, issue a mount command for it. On many operating systems, the DVD will be mounted automatically. If not, you may have to enter an explicit mount command. If your operating system supports the Rock Ridge Interchange Protocol extensions to ISO 9660, be sure to select them, unless they are the default. The following examples are not intended to be comprehensive, but may get you started, or at least give you clues about which manpages to read:
AIX:      mount -v cdrfs -r /dev/cd0 /cdrom

HP-UX:    mount -o cdcase /dev/disk/disk5 /mnt/cdrom/

Linux:    mount -t iso9660 -o ro,exec /dev/cdrom /mnt

Solaris:  If Volume Management is running, you should find that the DVD is automatically mounted as /cdrom/label_of_volume/. If not, you should be able to mount it with commands similar to these:

              mkdir /mnt1
              mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /mnt1

Virtual machines: If you are running in a virtual machine, you will need to convince the host operating system to allow your guest OS to have access to the DVD. The means of accomplishing this will vary. For reference, the following worked with a Linux guest running under VirtualBox V4.0.6, with Windows 7 as the host: (1) Shut down the virtual machine (don't just pause it; tell it to run its shutdown procedure). (2) The Settings dialog should now be visible (it's grayed out if the machine state is not shut down). (3) Use Settings to configure the DVD drive as both available to the guest OS and as "passthrough". (4) Boot the virtual machine. (5) Log in. (6) Insert the DVD. (7) At this point, the DVD was automatically mounted as /media/SPECHPC.
Note that you may need root privileges to mount the DVD or benchmark ISO.
If you haven't already done so by now, start a Terminal window (aka "command window", "shell", "console", "terminal emulator", "character cell window", "xterm", etc.) and issue a cd command to set your current working directory to the directory where the benchmark is mounted. The exact command will vary depending on the label on the media, the operating system, and the devices configured. It might look something like one of these:
$ cd /Volumes/SPECHPC
$ cd /media/SPECHPC
$ cd /dvdrom/HPC2021
$ cd /mnt
Type:
./install.sh
Depending on your installation type, you may be prompted for a destination directory:
SPEChpc 2021 Installation

Top of the SPEChpc tree is '/Volumes/SPECHPC'
Enter the directory you wish to install to (e.g. /usr/HPC2021)
/local/home/jli/HPC2021
When answering the above question, note that you will have to use syntax acceptable to sh (so you might need to say something like "$HOME/mydir" instead of "~/mydir"). As mentioned above, don't use spaces.
Note: You can also specify the destination directory on the command line, using the -d flag, for example:

./install.sh -d /local/home/jli/HPC2021
The installation procedure will show you the directories that will be used to install from and to. You will see a message such as this one:
Installing FROM /Volumes/SPECHPC
Installing TO /local/home/jli/HPC2021

Is this correct? (Please enter 'yes' or 'no')
yes
Enter "yes" if the directories match your expectations. If there is an error, enter "no"; the procedure will exit, and you can try again, possibly using the -d flag mentioned in the note above.
The installation procedure will attempt to automatically determine your current platform type (hardware architecture, operating system, and so forth). In some cases, the tools may identify several candidate matches for your architecture.
You typically do not have to worry about whether the toolset is an exact match to your current environment, because the toolset selection does not affect your benchmark scores, and because the installation procedure does a series of tests to ensure that the selected tools work on your system.
Examples: (1) The installation procedure may determine that SPEC tools built on version "N" of your operating system are entirely functional on version "N+3". (2) Tools built on one Linux distribution often work correctly on another: notably, certain versions of SuSE are compatible, from a tools point of view, with certain versions of RedHat. (3) Tools built on AMD chips with 64-bit instructions ("amd64") are compatible with Intel chips that implement the same instruction set under the names "EM64T" or "Intel 64" (but not compatible with chips that implement the Itanium instruction set, abbreviated "ia64").
Mostly, you don't need to worry about all this, because the installation
procedure does a comprehensive set of tests to verify compatibility.
If at least one candidate match is found, you will see a message such as:
The following toolset is expected to work on your platform. If the
automatically installed one does not work, please re-run install.sh and
exclude that toolset using the '-e' switch. The toolset selected will not
affect your benchmark scores.

linux-x86_64            For x86_64 Linux systems
                        Built on Oracle Linux 6.0 with GCC v4.4.4 20100726 (Red Hat 4.4.4-13)
If the installation procedure is unable to determine your system architecture, you will see a message such as:
We do not appear to have vendor supplied binaries for your architecture. You will have to compile the tool binaries by yourself. Please read the file /Volumes/SPECHPC/Docs/tools_build.html for instructions on how you might be able to build them.
If you see that message, please stop here, and examine the file tools_build.html.
Note: If the tools that are automatically installed on your system do not work, but you know that another set of tools that is in the list will work, you can exclude the ones that do not work. You may be instructed to do this during the first installation. Use the -e flag for install.sh, for example:
./install.sh -e linux-x86_64
The above will cause the tools for linux-x86_64 to be excluded from consideration.
Alternatively, you can explicitly direct which toolset is to be used with the -u flag for install.sh, for example:
./install.sh -u linux-x86_64-rhel8
The above will cause the tools for linux-x86_64-rhel8 to be installed, even if another toolset would have been chosen automatically. If you specify tools that do not work on your system, the installation procedure will stop without installing any tools.
libnsl.so: In order to remain compatible with older Linux OSes, the tools are linked against libnsl.so.1, which has since been deprecated. However, most newer OSes install it for compatibility, or simply link libnsl.so.1 to libnsl.so.2. If the 'linux-x86_64' tools package is used but you get runtime errors from the tools, such as 'specperl', check whether libnsl.so.1 is installed on your system. Note that RHEL 8 does not provide this compatibility by default. You will need to install libnsl.so.1, or use the linux-x86_64-rhel8 toolset via "install.sh -u linux-x86_64-rhel8".
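As a quick sanity check before (or after) installing, you can look for libnsl.so.1 yourself. The following is a sketch that assumes the common Linux library directories; the exact paths vary by distribution.

```shell
# Sketch: look for libnsl.so.1 in the usual library directories.
found=no
for d in /lib64 /usr/lib64 /lib /usr/lib \
         /lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu; do
    if [ -e "$d/libnsl.so.1" ]; then
        found=yes
        echo "Found: $d/libnsl.so.1"
    fi
done
if [ "$found" = no ]; then
    echo "libnsl.so.1 not found; install it, or use:"
    echo "  ./install.sh -u linux-x86_64-rhel8"
fi
```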
Thousands of files will be unpacked from the distribution media and quietly installed on your destination disk. (If you would prefer to see them all named, you can set VERBOSE=1 in your environment before installing the kit.) Various tests will be performed to verify that the files have been correctly installed, and that the tools work correctly. You should see summary messages such as these:
=================================================================
Attempting to install the linux-x86_64 toolset...   <<-- or whatever toolset was selected
Checking the integrity of your source tree...

Checksums are all okay.

Unpacking binary tools for linux-x86_64...          <<-- your toolset
Checking the integrity of your binary tools...

Checksums are all okay.

Testing the tools installation (this may take a minute)
........................................................................o.......
................................................................................
..........................................................
Installation successful.  Source the shrc or cshrc in
/local/home/jli/HPC2021                             <<-- your directory
to set up your environment for the benchmark.
At this point, you will have consumed about 800MB of disk space on the destination drive.
Change your current directory to the top-level SPEC directory and source either shrc or cshrc:
For example, if you are using a Bourne-compatible shell (such as ash, bash, ksh, zsh), you could type:

$ . ./shrc
If you are using a csh-compatible shell, you could type:

% source cshrc
The effect of the above commands is to set up environment variables and paths for SPEC.
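A quick way to confirm that the sourcing worked is to check the variables it sets. The install path below is the example used throughout this guide; substitute your own.

```shell
# Sketch: source shrc and verify the environment (Bourne-compatible shells).
cd /local/home/jli/HPC2021      # your install directory
. ./shrc
echo "$SPEC"                    # should print the top of your SPEC tree
command -v runhpc               # runhpc should now be found on your PATH
```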
From this point forward, we are testing basic abilities of the SPEChpc 2021 kit, including compiling benchmarks and running them. You may skip the remaining steps if all of the following are true:
Warning: even if someone else supplies binaries, you remain responsible for compliance with SPEC's Fair Use rule and the SPEChpc run rules.
Change to the config directory, and test that you can build a benchmark using a config file supplied for your system. For example:
$ cd $SPEC/config
$ cp Example_nvhpc.cfg nv.cfg
$ runhpc --config=nv.cfg --action=build --tune=base -ranks 40 505.lbm_t
The above command assumes that you can identify a config file (in the directory $SPEC/config) that is appropriate for you. In this case, the user started with Example_nvhpc.cfg. Your starting point will probably differ; here are some resources to help:
The "--tune=base" above indicates that we want to use only the simple tuning, if more than one kind of tuning is supplied in the config file. The "-ranks 40" indicates that we want to use 40 ranks.
Test that you can run a benchmark, using the minimal input set - the "test" workload. For example:
$ runhpc --config=nv.cfg -ranks 40 --size=test --noreportable --tune=base --iterations=1 505.lbm_t
The "--noreportable" flag ensures that the tools will allow us to run just a single benchmark instead of the whole suite; "--iterations=1" says to run the benchmark just once.
Check the results in $SPEC/result
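For example, you might do a quick scan of the result directory (the file numbering will differ on your system):

```shell
# Sketch: list the result files and scan the logs for reported errors.
cd "$SPEC/result"
ls
# grep exits nonzero when nothing matches, which is the outcome we want here
grep -i 'error' hpc2021.*.log || echo "no errors found in the logs"
```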
Test that you can run a benchmark using the real input set - the "reference" workload. For example:
$ runhpc --config=nv.cfg -ranks 40 --size=ref --noreportable --tune=base --iterations=1 505.lbm_t
Check the results in $SPEC/result.
If everything has worked up to this point, you may wish to start a full run, perhaps leaving your computer to run overnight. A full run demands significant resources from your machine, including computational power and several types of memory. To avoid surprises, review system-requirements.html before starting the reportable run.
Have a look at runhpc.html to learn how to do a full run of the suite.
The command runhpc -h will give you a brief summary of the many options for runhpc.
To run a reportable run of the Tiny suite with simple (baseline) tuning:
$ runhpc --config=nv.cfg -ranks 40 --reportable --tune=base tiny
Here is a complete Linux installation, with interspersed commentary. This example follows the steps listed above. We assume that Steps 1 through 3 are already complete (the pre-requisites are met, we have enough space, the benchmark is mounted).
Step 4: Set the current working directory to the benchmark mount point:
$ cd /media/SPECHPC
Step 5: Invoke install.sh. When prompted, we enter the destination directory:
$ ./install.sh
SPEC HPC Installation

Top of the HPC tree is '/mount/HPC2021'
Installing FROM /mount/HPC2021
Installing TO /local/home/jli/HPC2021

Is this correct? (Please enter 'yes' or 'no')
yes

The following toolset is expected to work on your platform. If the
automatically installed one does not work, please re-run install.sh and
exclude that toolset using the '-e' switch. The toolset selected will not
affect your benchmark scores.

linux-x86_64            For x86_64 Linux systems
                        Built on Oracle Linux 6.0 with GCC v4.4.4 20100726 (Red Hat 4.4.4-13)

=================================================================
Attempting to install the linux-x86_64 toolset...
Checking the integrity of your source tree...

Checksums are all okay.

Unpacking binary tools for linux-x86_64...
Checking the integrity of your binary tools...

Checksums are all okay.

Testing the tools installation (this may take a minute)
................................................................................
................................................................................
................................................................................
..............................................................-.......
Installation successful.  Source the shrc or cshrc in
/local/home/jli/HPC2021
to set up your environment for the benchmark.
Step 6: Now, we change the current working directory from the install media to the location of the new SPEChpc 2021 tree. Since this user has a Bourne compatible shell, shrc is sourced (for csh compatible shells, use cshrc).
Step 7: Next, the config file Example_nvhpc.cfg is picked as a starting point for this system, and we build one benchmark.
$ cd /local/home/jli/HPC2021
$ . ./shrc
$ cd config
$ cp Example_nvhpc.cfg nv.cfg
$ runhpc --config=nv.cfg --action=build --noreportable --tune=base --iterations=1 505.lbm_t
SPEC HPC(r) 2021 Benchmark Suites
Copyright 1995-2021 Standard Performance Evaluation Corporation (SPEC)
runhpc v.unknown
Using 'linux-x86_64' tools
Reading file manifests... read 16870 entries from 2 files in 0.08s (200130 files/s)
Loading runhpc modules.................
Locating benchmarks...found 31 benchmarks in 5 benchsets.
Reading config file '/local/home/jli/HPC2021/config/nv.cfg'
Reading included config file '/local/home/jli/HPC2021/config/Example_SUT.inc'
Retrieving flags file (/local/home/jli/HPC2021/config/flags/nvhpc_flags.xml)...

1 configuration selected:

Action Benchmarks
------ ----------------------------------------------------------------------
build  505.lbm_t
-------------------------------------------------------------------------------

Benchmarks selected: 505.lbm_t
Compiling Binaries
  Building 505.lbm_t base nv_mpi: (build_base_nv_mpi.0000) [2021-08-01 14:08:37]
specmake --output-sync -j 40 clean
rm -rf *.o lbm.out
find . \( -name \*.o -o -name '*.fppized.f*' -o -name '*.i' -o -name '*.mod' \) -print | xargs rm -rf
rm -rf lbm
rm -rf lbm.exe
rm -rf core
specmake --output-sync -j 40 build
mpicc -c -o specrand/specrand.o -DSPEC -DNDEBUG -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast specrand/specrand.c
localrc.dev-sky5 has not changed
mpicc -c -o main.o -DSPEC -DNDEBUG -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast main.c
mpicc -c -o lbm.o -DSPEC -DNDEBUG -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast lbm.c
mpicc -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast lbm.o main.o specrand/specrand.o -lm -o lbm
specmake --output-sync -j 40 options
COMP: "mpicc -c -o options.o -DSPEC -DNDEBUG -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast "
C: CC="mpicc"
C: COBJOPT="-c -o options"
P: CPUFLAGS="-DSPEC -DNDEBUG"
P: BENCH_FLAGS=""
P: BENCH_CFLAGS=""
O: OPTIMIZE="-w -Mfprelaxed -Mnouniform -Mstack_arrays -fast"
O: COPTIMIZE=""
P: PORTABILITY=""
P: CPORTABILITY=""
O: EXTRA_CFLAGS=""
O: EXTRA_OPTIMIZE=""
O: EXTRA_COPTIMIZE=""
P: EXTRA_PORTABILITY=""
P: EXTRA_CPORTABILITY=""
LINK: "mpicc -w -Mfprelaxed -Mnouniform -Mstack_arrays -fast -lm -o options "
C: LD="mpicc"
O: OPTIMIZE="-w -Mfprelaxed -Mnouniform -Mstack_arrays -fast"
C: MATH_LIBS="-lm"
C: LDOUT="-o options"
specmake --output-sync -j 40 compiler-version
CC_VERSION_OPTION:
nvc Rel 21.7 64-bit target on x86-64 Linux -tp skylake
NVIDIA Compilers and Tools
Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Build successes for tiny: 505.lbm_t(base)
Build errors for tiny: None

Build Complete

The log for this run is in /local/home/jli/HPC2021/result/hpc2021.001.log

runhpc finished at 2021-08-01 14:08:49; 12 total seconds elapsed
Just above, various compile and link commands may or may not be echoed to your screen, depending on the settings in your config file. At this point, we've accomplished a lot. The SPEC tree is installed, and we have verified that a benchmark can be compiled using the C compiler.
Step 8: Now try running a benchmark, using the minimal test workload. The test workload runs in a tiny amount of time and does a minimal verification that the benchmark executable can at least start up:
$ runhpc --config=nv.cfg --action=run --noreportable --tune=base --size=test --iterations=1 -ranks=40 505.lbm_t
SPEC HPC(r) 2021 Benchmark Suites
Copyright 1995-2021 Standard Performance Evaluation Corporation (SPEC)
runhpc v.unknown
Using 'linux-x86_64' tools
Reading file manifests... read 16870 entries from 2 files in 0.08s (203072 files/s)
Loading runhpc modules.................
Locating benchmarks...found 31 benchmarks in 5 benchsets.
Reading config file '/local/home/jli/config/nv.cfg'
Reading included config file '/local/home/jli/config/Example_SUT.inc'
Retrieving flags file (/local/home/jli/config/flags/nvhpc_flags.xml)...

1 configuration selected:

Action   Run Mode Workload Report Type     Benchmarks
-------- -------- -------- --------------- ----------------------------
validate speed    test     SPEChpc2021_tny 505.lbm_t
-------------------------------------------------------------------------------

Benchmarks selected: 505.lbm_t
Compiling Binaries
  Up to date 505.lbm_t base nv_mpi

Setting Up Run Directories
  Setting up 505.lbm_t test base nv_mpi: run_base_test_nv_mpi.0001
Running Benchmarks
  Running 505.lbm_t test base nv_mpi [2021-08-01 14:11:56]
/local/home/jli/bin/specinvoke -d /local/home/jli/benchspec/HPC/505.lbm_t/run/run_base_test_nv_mpi.0001 -f speccmds.cmd -q -e speccmds.err -o speccmds.stdout
/local/home/jli/bin/specinvoke -d /local/home/jli/benchspec/HPC/505.lbm_t/run/run_base_test_nv_mpi.0001 -f compare.cmd -E -e compare.err -o compare.stdout
Success: 1x505.lbm_t
Producing Raw Reports
label: nv_mpi
workload: test
metric: SPEChpc2021_tny_base
format: raw -> /local/home/jli/result/hpc2021_tny.002.tiny.test.rsf
Parsing flags for 505.lbm_t base: done
Doing flag reduction: done
format: Text -> /local/home/jli/result/hpc2021_tny.002.tiny.test.txt

The log for this run is in /local/home/jli/result/hpc2021.002.log

runhpc finished at 2021-08-01 14:12:10; 14 total seconds elapsed
Notice the line "Success: 1x505.lbm_t" near the end of the output. That is what we want to see.
Step 9: Let's try running LBM with the Tiny reference workload, using 40 ranks. This will take a while; on the Linux server used in this example, it ran for about 20 minutes.
$ runhpc --config=nv.cfg --action=run --noreportable --tune=base --size=ref --iterations=1 -ranks=40 505.lbm_t
SPEC HPC(r) 2021 Benchmark Suites
Copyright 1995-2021 Standard Performance Evaluation Corporation (SPEC)
runhpc v.unknown
Using 'linux-x86_64' tools
Reading file manifests... read 16870 entries from 2 files in 0.09s (191617 files/s)
Loading runhpc modules.................
Locating benchmarks...found 31 benchmarks in 5 benchsets.
Reading config file '/local/home/jli/config/nv.cfg'
Reading included config file '/local/home/jli/config/Example_SUT.inc'
Retrieving flags file (/local/home/jli/config/flags/nvhpc_flags.xml)...

1 configuration selected:

Action   Run Mode Workload Report Type     Benchmarks
-------- -------- -------- --------------- ----------------------------
validate speed    ref      SPEChpc2021_tny 505.lbm_t
-------------------------------------------------------------------------------

Benchmarks selected: 505.lbm_t
Compiling Binaries
  Up to date 505.lbm_t base nv_mpi

Setting Up Run Directories
  Setting up 505.lbm_t ref base nv_mpi: run_base_ref_nv_mpi.0001
Running Benchmarks
  Running 505.lbm_t ref base nv_mpi [2021-08-01 14:15:24]
/local/home/jli/bin/specinvoke -d /local/home/jli/benchspec/HPC/505.lbm_t/run/run_base_ref_nv_mpi.0001 -f speccmds.cmd -q -e speccmds.err -o speccmds.stdout
/local/home/jli/bin/specinvoke -d /local/home/jli/benchspec/HPC/505.lbm_t/run/run_base_ref_nv_mpi.0001 -f compare.cmd -E -e compare.err -o compare.stdout
Success: 1x505.lbm_t
Producing Raw Reports
label: nv_mpi
workload: ref
metric: SPEChpc2021_tny_base
format: raw -> /local/home/jli/result/hpc2021_tny.003.tiny.ref.rsf
Parsing flags for 505.lbm_t base: done
Doing flag reduction: done
format: Text -> /local/home/jli/result/hpc2021_tny.003.tiny.ref.txt

The log for this run is in /local/home/jli/result/hpc2021.003.log

runhpc finished at 2021-08-01 14:34:50; 1167 total seconds elapsed
Success with the Tiny ref workload! So now let's look in the result directory and see what we find:
$ cd result
$ ls
hpc2021.001.log                hpc2021_tny.002.tiny.test.rsf
hpc2021.002.log                hpc2021_tny.002.tiny.test.txt
hpc2021.003.log                hpc2021_tny.003.tiny.ref.rsf
lock.hpc2021                   hpc2021_tny.003.tiny.ref.txt
$ grep runhpc: *log
hpc2021.001.log:runhpc: runhpc --config=nv.cfg --action=build --noreportable --tune=base --iterations=1 505.lbm_t
hpc2021.002.log:runhpc: runhpc --config=nv.cfg --action=run --noreportable --tune=base --size=test --iterations=1 -ranks=40 505.lbm_t
hpc2021.003.log:runhpc: runhpc --config=nv.cfg --action=run --noreportable --tune=base --size=ref --iterations=1 -ranks=40 505.lbm_t
$
Notice the three separate sets of files: .001, .002, and .003.
hpc2021.001.log has the log from the compile.
hpc2021.002.log has the log from running 505.lbm_t with the "test" input.
hpc2021.003.log has the log from running 505.lbm_t with the "ref" input.
Here is the complete .txt report from running 505.lbm_t ref:
$ cat hpc2021_tny.003.tiny.ref.txt
################################################################################
#    INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN   #
#                                                                              #
#   'reportable' flag not set during run                                       #
#   534.hpgmgfv_t (base) did not have enough runs!                             #
#   521.miniswp_t (base) did not have enough runs!                             #
#   528.pot3d_t (base) did not have enough runs!                               #
#   505.lbm_t (base) did not have enough runs!                                 #
#   518.tealeaf_t (base) did not have enough runs!                             #
#   519.clvleaf_t (base) did not have enough runs!                             #
#   532.sph_exa_t (base) did not have enough runs!                             #
#   513.soma_t (base) did not have enough runs!                                #
#   535.weather_t (base) did not have enough runs!                             #
#                                                                              #
#    INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN   #
################################################################################

                        SPEChpc(TM) 2021 Tiny Result
                        Mega Technology Big Compute

Test Sponsor: Sponsor Name
hpc2021 License: 9999                    Test date: Aug-2021
Test sponsor: Sponsor Name               Hardware availability: Nov-2099
Tested by: Testing Company Name          Software availability: Nov-2099

                                    Estimated                                Estimated
               Base   Base   Thrds   Base      Base      Peak   Peak   Thrds  Peak      Peak
Benchmarks     Model  Ranks  pr Rnk  Run Time  Ratio     Model  Ranks  pr Rnk Run Time  Ratio
-------------- ------ ------ ------ --------- ---------  ------ ------ ------ --------- ---------
505.lbm_t      MPI        40      1      1164      1.98 *
513.soma_t                                          NR
518.tealeaf_t                                       NR
519.clvleaf_t                                       NR
521.miniswp_t                                       NR
528.pot3d_t                                         NR
532.sph_exa_t                                       NR
534.hpgmgfv_t                                       NR
535.weather_t                                       NR
============================================================================================================
505.lbm_t      MPI        40      1      1164      1.98 *
513.soma_t                                          NR
518.tealeaf_t                                       NR
519.clvleaf_t                                       NR
521.miniswp_t                                       NR
528.pot3d_t                                         NR
532.sph_exa_t                                       NR
534.hpgmgfv_t                                       NR
535.weather_t                                       NR

Est. SPEChpc 2021_tny_base                    --
Est. SPEChpc 2021_tny_peak               Not Run

BENCHMARK DETAILS
-----------------
Type of System: Homogenous Cluster
Total Compute Nodes: 2
Total Chips: 2
Total Cores: 128
Total Threads: 128
Total Memory: 512 GB
Compiler: C/C++/Fortran: Version 21.7 of NVIDIA HPC SDK for Linux
MPI Library: OpenMPI Version 4.0.5
Other MPI Info: None
Other Software: None
Base Parallel Model: MPI
Base Ranks Run: 40
Base Threads Run: 1
Peak Parallel Models: Not Run

Node Description: TurboBlaster 5000
===================================
HARDWARE
--------
Number of nodes: 2
Uses of the node: compute
Vendor: Mega Technology
Model: Turblaster 5000
CPU Name: Turbo CPU
CPU(s) orderable: 1 chips
Chips enabled: 1
Cores enabled: 64
Cores per chip: 64
Threads per core: 1
CPU Characteristics: Turbo up to 3.4 GHz
CPU MHz: 2250
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 512 KB I+D on chip per core
L3 Cache: 256 MB I+D on chip per chip, 16 MB shared / 4 cores
Other Cache: None
Memory: 256 GB (8 x 32 GB 2Rx8 PC4-3200AA-R)
Disk Subsystem: 1 x 480 GB SATA 2.5" SSD
Other Hardware: None
Accel Count: 4
Accel Model: Tesla V100-PCIE-16GB
Accel Vendor: NVIDIA Corporation
Accel Type: GPU
Accel Connection: PCIe 3.0 16x
Accel ECC enabled: Yes
Accel Description: See Notes
Adapter: None
Number of Adapters: 0
Slot Type: None
Data Rate: None
Ports Used: 0
Interconnect Type: None

SOFTWARE
--------
Adapter: None
Adapter Driver: None
Adapter Firmware: None
Operating System: SUSE Linux Enterprise Linux Server 12, 4.12.14-94.41-default
Local File System: xfs
Shared File System: None
System State: Multi-user, run level 3
Other Software: None

Node Description: NFS
=====================
HARDWARE
--------
Number of nodes: 1
Uses of the node: Fileserver
Vendor: Big Storage Company
Model: BG650
CPU Name: Intel Xeon Platinum 8280
CPU(s) orderable: 1-2 chips
Chips enabled: 2
Cores enabled: 56
Cores per chip: 28
Threads per core: 1
CPU Characteristics: None
CPU MHz: 2700
Primary Cache: 32 KB I + 32 KB D on chip per core
Secondary Cache: 1 MB I+D on chip per core
L3 Cache: 39424 KB I+D on chip per chip
Other Cache: None
Memory: 768 GB (24 x 32 GB 2Rx4 PC4-2933Y-R)
Disk Subsystem: 1 x 1 TB 12 Gbps SAS 2.5" SSD (JBOD)
Other Hardware: None
Number of Adapters: 1
Slot Type: PCI-Express 3.0 x16
Data Rate: 100 Gb/s
Ports Used: 1
Interconnect Type: BG 5000 series

SOFTWARE
--------
Adapter Driver: 10.9.1.0.15
Adapter Firmware: 10.9.0.1.0
Operating System: Red Hat Enterprise Linux Server release 7.6
Local File System: None
Shared File System: NFS
System State: Multi-User, run level 3
Other Software: None

Interconnect Description: Big Interconnect Company
==================================================
HARDWARE
--------
Vendor: Big Interconnect Company
Model: BI 100 Series
Switch Model: BI 100 Series 48 Port 2 PSU
Number of Switches: 1
Number of Ports: 48
Data Rate: 100 Gb/s
Firmware: 10.3.0.0.60
Topology: Mesh
Primary Use: MPI Traffic

SOFTWARE
--------

Submit Notes
------------
The config file option 'submit' was used.

General Notes
-------------
MPI startup command:
mpirun command was used to start MPI jobs.

Compiler Version Notes
----------------------
==============================================================================
CC 505.lbm_t(base)
------------------------------------------------------------------------------
nvc Rel Dev-r204824 64-bit target on x86-64 Linux -tp skylake
NVIDIA Compilers and Tools
Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.
------------------------------------------------------------------------------

Base Runtime Environment
------------------------
C benchmarks:
 505.lbm_t: No flags used

Base Compiler Invocation
------------------------
C benchmarks:
 505.lbm_t: mpicc

Base Optimization Flags
-----------------------
C benchmarks:
 505.lbm_t: -Mfprelaxed -Mnouniform -Mstack_arrays -fast

Base Other Flags
----------------
C benchmarks:
 505.lbm_t: -w

SPEChpc is a trademark of the Standard Performance Evaluation Corporation.
All other brand and product names appearing in this result are trademarks or
registered trademarks of their respective holders.

################################################################################
#    INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN   #
#                                                                              #
#   'reportable' flag not set during run                                       #
#   534.hpgmgfv_t (base) did not have enough runs!                             #
#   521.miniswp_t (base) did not have enough runs!                             #
#   528.pot3d_t (base) did not have enough runs!                               #
#   505.lbm_t (base) did not have enough runs!                                 #
#   518.tealeaf_t (base) did not have enough runs!                             #
#   519.clvleaf_t (base) did not have enough runs!                             #
#   532.sph_exa_t (base) did not have enough runs!                             #
#   513.soma_t (base) did not have enough runs!                                #
#   535.weather_t (base) did not have enough runs!                             #
#                                                                              #
#    INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN -- INVALID RUN   #
################################################################################

-------------------------------------------------------------------------------------------------------------
For questions about this result, please contact the tester.
For other inquiries, please contact info@spec.org.
Copyright 2021 Standard Performance Evaluation Corporation
Tested with SPEChpc2021 v0.9.1 on 2021-08-01 14:15:23-0700.
Report generated on 2021-08-01 14:34:50 by hpc2021 ASCII formatter v091.
Done. The suite is installed, and we can run at least one benchmark for real (see the report of the time spent in 505.lbm_t above).
At this time, SPEC does not provide an uninstall utility for SPEChpc 2021. Confusingly, there is a file named uninstall.sh in the top directory, but it does not remove the whole product; it only removes the SPEC tool set, and does not affect the benchmarks (which consume the bulk of the disk space).
To remove SPEChpc 2021, use rm -Rf on the directory where you installed the suite, for example:
rm -Rf /local/home/jli/HPC2021
If you have been using the output_root feature, output from your runs has been written outside the SPEC tree, and you will have to track those directories down separately. Therefore, prior to removing the tree, you might want to look for mentions of output_root, for example:
Unix:
    cd $SPEC/config
    grep output_root *cfg
Note: instead of deleting the entire directory tree, some users find it useful to keep the config and result subdirectories, while deleting everything else.
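A sketch of that approach follows. The install path is the example used throughout this guide; adjust it to your own location.

```shell
# Sketch: save config and result, delete everything else, then restore them.
SPEC_TOP=/local/home/jli/HPC2021     # your install directory
KEEP=$(mktemp -d)                    # temporary holding area
cp -R "$SPEC_TOP/config" "$SPEC_TOP/result" "$KEEP/"
rm -Rf "$SPEC_TOP"
mkdir -p "$SPEC_TOP"
mv "$KEEP/config" "$KEEP/result" "$SPEC_TOP/"
rmdir "$KEEP"
```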
Copyright 2014-2021 Standard Performance Evaluation Corporation
All Rights Reserved
Q. Do you have to be root?
Occasionally, users of Unix systems have asked whether it is necessary to elevate privileges, or to become 'root', prior to entering the above command. SPEC recommends (*) that you do not become root, because:
(1) To the best of SPEC's knowledge, no component of SPEChpc 2021 needs to modify system directories, nor does any component need to call privileged system interfaces.
(2) Therefore, if it appears that there is some reason why you need to be root, the cause is likely to be outside the SPEC toolset - for example, disk protections or quota limits.
(3) For safe benchmarking, it is better to avoid being root, for the same reason that it is a good idea to wear seat belts in a car: accidents happen, and humans make mistakes. For example, if you accidentally type:
    rm -Rf / local/home/jli/HPC2021
when you meant to say:
    rm -Rf /local/home/jli/HPC2021
then you will be very grateful if you are not privileged at that moment.
(*) This is only a recommendation, not a requirement nor a rule.