SPEC CPU2006/SPEC CPU2017 Platform Settings for HPE ProLiant AMD-based systems
Operating System (OS) Application/Service Tuning:
The following OS tunes may have been applied to optimize the performance of some areas of the system:
- ulimit: Used to set user limits of system-wide resources. Provides control over resources available to the shell and processes started by it. Some common ulimit commands may include:
- ulimit -s [n | unlimited]: Set the stack size to n kbytes, or unlimited to allow the stack size to grow without limit.
- ulimit -l [n | unlimited]: Set the maximum size, in kbytes, that can be locked into memory.
- Performance Governors (Linux): In-kernel CPU frequency governors are pre-configured power schemes for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies based on CPU utilization, allowing power savings without sacrificing performance. To set the governor, use the following command: "cpupower frequency-set -r -g {desired_governor}"
- Disabling Linux services: Certain Linux services may be disabled to minimize tasks that may consume CPU cycles.
- irqbalance: Disabled through "service irqbalance stop". Depending on the workload involved, the irqbalance service reassigns various IRQs to system CPUs. Though this service might help in some situations, disabling it can also help environments which need to minimize or eliminate latency to respond more quickly to events.
- tuned-adm: The tuned-adm tool is a command line interface for switching between different tuning profiles provided by the tuned daemon in supported Linux distributions. The default configuration file is located in /etc/tuned.conf and the supported profiles can be found in /etc/tune-profiles. Some profiles that may be available by default include: default, desktop-powersave, server-powersave, laptop-ac-powersave, laptop-battery-powersave, spindown-disk, throughput-performance, latency-performance, enterprise-storage. To set a profile, one can issue the command "tuned-adm profile (profile_name)". Here are details about relevant profiles:
- throughput-performance: Server profile for typical throughput tuning. This profile disables tuned and ktune power saving features, enables sysctl settings that may improve disk and network IO throughput performance, switches to the deadline scheduler, and sets the CPU governor to performance.
- latency-performance: Server profile for typical latency tuning. This profile disables tuned and ktune power saving features, enables the deadline IO scheduler, and sets the CPU governor to performance.
- enterprise-storage: Server profile for high disk throughput tuning. This profile disables tuned and ktune power saving features, enables the deadline IO scheduler, enables hugepages and disables disk barriers, increases disk readahead values, and sets the CPU governor to performance.
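As a rough illustration, the read-only sketch below checks the current state of several of the knobs above. The sysfs path and service name are the usual Linux ones, but availability varies by distribution, and actually applying the tunes requires root:

```shell
# Inspect current values of the OS-level tunes discussed above.
# Everything here is read-only; changing the values requires root.

stack_kb=$(ulimit -s)    # per-shell stack limit in kbytes, or "unlimited"
locked_kb=$(ulimit -l)   # per-shell max locked-in-memory size

# Current CPU frequency governor, if the cpufreq interface is exposed:
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
    governor=$(cat "$gov_file")
else
    governor=unknown
fi

# irqbalance service state (systemctl may be absent, e.g. in containers):
if command -v systemctl >/dev/null 2>&1; then
    irqbalance_state=$(systemctl is-active irqbalance 2>/dev/null || true)
else
    irqbalance_state=unknown
fi

echo "stack=${stack_kb}kB locked=${locked_kb}kB governor=${governor} irqbalance=${irqbalance_state:-unknown}"
```

On a system tuned as described above, one would expect the governor to read "performance" and irqbalance to read "inactive".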
OS Kernel Parameter Tuning:
The following Linux Kernel parameters were tuned to better optimize performance of some areas of the system:
- dirty_background_ratio: Set through "echo 40 > /proc/sys/vm/dirty_background_ratio". This setting can help Linux disk caching and performance by setting the percentage of system memory that can be filled with dirty pages before the kernel's background writeback threads begin writing them to disk.
- dirty_ratio: Set through "echo 8 > /proc/sys/vm/dirty_ratio". This setting is the maximum percentage of system memory that can be filled with dirty pages before everything must get committed to disk.
- ksm/sleep_millisecs: Set through "echo 200 > /sys/kernel/mm/ksm/sleep_millisecs". This setting controls how many milliseconds the ksmd (KSM daemon) should sleep before the next scan.
- khugepaged/scan_sleep_millisecs: Set through "echo 50000 > /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs". This setting controls how many milliseconds khugepaged should wait if there is a hugepage allocation failure, to throttle the next allocation attempt.
- swappiness: The swappiness value can range from 0 to 100. A value of 100 will cause the kernel to swap out inactive processes frequently in favor of file system performance, resulting in large disk cache sizes. A value of 1 tells the kernel to only swap processes to disk if absolutely necessary. This can be set through a command like "echo 1 > /proc/sys/vm/swappiness".
- numa_balancing: Disabled through "echo 0 > /proc/sys/kernel/numa_balancing". This feature automatically migrates data on demand so that memory pages are local to the CPU accessing them. Depending on the workload involved, enabling this can boost performance if the workload performs well on NUMA hardware. If the workload is statically set to balance between nodes, then this service may not provide a benefit.
- Zone Reclaim Mode: Zone reclaim allows the reclaiming of pages from a zone if the number of free pages falls below a watermark even if other zones still have enough pages available. Reclaiming a page can be more beneficial than taking the performance penalties that are associated with allocating a page on a remote zone, especially for NUMA machines. To tell the kernel to free local node memory rather than grabbing free memory from remote nodes, use a command like "echo 1 > /proc/sys/vm/zone_reclaim_mode".
- Free the file system page cache: The command "echo 1 > /proc/sys/vm/drop_caches" is used to free up the filesystem page cache.
- kernel/randomize_va_space, also known as ASLR (Address Space Layout Randomization): This setting can be used to select the type of process address space randomization. Defaults differ based on whether the architecture supports ASLR, whether the kernel was built with the CONFIG_COMPAT_BRK option or not, or the kernel boot options used. Possible settings:
- 0: Turn process address space randomization off.
- 1: Randomize addresses of mmap base, stack, and VDSO pages.
- 2: Additionally randomize the heap. (This is the default unless the kernel was built with CONFIG_COMPAT_BRK.)
- Disabling ASLR can make process execution more deterministic and runtimes more consistent. For more information see the randomize_va_space entry in the Linux sysctl documentation.
- Transparent Hugepages (THP): THP is an abstraction layer that automates most aspects of creating, managing, and using huge pages. It is designed to hide much of the complexity of using huge pages from system administrators and developers. Huge pages increase the memory page size from 4 kilobytes to 2 megabytes, which provides significant performance advantages on systems with highly contended resources and large memory workloads. If memory utilization is too high, or memory is too fragmented for huge pages to be allocated, the kernel will assign smaller 4k pages instead. Most recent Linux OS releases have THP enabled by default. THP usage is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/enabled. Possible values:
- never: entirely disable THP usage.
- madvise: enable THP usage only inside regions marked MADV_HUGEPAGE using madvise(2).
- always: enable THP usage system-wide. This is the default.
- THP creation is controlled by the sysfs setting /sys/kernel/mm/transparent_hugepage/defrag. Possible values:
- never: if no THP are available to satisfy a request, do not attempt to make any.
- defer: an allocation requesting THP when none are available gets normal pages while THP creation is requested in the background.
- defer+madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2); for all other regions it's like "defer".
- madvise: acts like "always", but only for allocations in regions marked MADV_HUGEPAGE using madvise(2). This is the default.
- always: an allocation requesting THP when none are available will stall until some are made.
- An application that frequently requests THP can often benefit from waiting for an allocation until those huge pages can be assembled. For more information see the Linux transparent hugepage documentation.
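The /proc and /sys writes above all follow the same pattern. The hedged sketch below only reads the current values back; writing them ("echo N > file") requires root, and for the /proc/sys entries "sysctl -w" or an /etc/sysctl.conf entry is the persistent equivalent:

```shell
# Report the current value of each tunable discussed above, tolerating
# kernels that do not expose a given file.
show() {
    if [ -r "$1" ]; then
        echo "$1=$(cat "$1")"
    else
        echo "$1=missing"
    fi
}

for f in /proc/sys/vm/dirty_background_ratio \
         /proc/sys/vm/dirty_ratio \
         /proc/sys/vm/swappiness \
         /proc/sys/vm/zone_reclaim_mode \
         /proc/sys/kernel/numa_balancing \
         /proc/sys/kernel/randomize_va_space \
         /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
    show "$f"
done
```

For the two transparent_hugepage files, the kernel prints all possible values with the active one in brackets, e.g. "always [madvise] never".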
Linux Huge Page settings:
If one prefers not to use Transparent Hugepages, one can instead set up static Huge Pages by following the steps below:
- Create a mount point for the huge pages: "mkdir /mnt/hugepages"
- The huge page file system needs to be mounted when the system reboots. Add the following to a system boot configuration file before any services are started: "mount -t hugetlbfs nodev /mnt/hugepages"
- Set vm/nr_hugepages=N in your /etc/sysctl.conf file where N is the maximum number of pages the system may allocate.
- Reboot to have the changes take effect.
Note that further information about huge pages may be found in your Linux documentation file: /usr/src/linux/Documentation/vm/hugetlbpage.txt
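A condensed sketch of those steps follows; the root-only parts are shown as comments, N=128 is purely an illustrative page count, and the verification step works unprivileged:

```shell
# Root-only setup steps from above, shown as comments:
#   mkdir /mnt/hugepages
#   mount -t hugetlbfs nodev /mnt/hugepages
#   echo "vm/nr_hugepages=128" >> /etc/sysctl.conf   # N=128 is illustrative

# Unprivileged verification: the kernel reports the static pool in
# /proc/meminfo (HugePages_Total, HugePages_Free, etc.).
hugepage_info=$(grep '^HugePages' /proc/meminfo 2>/dev/null || echo unavailable)
echo "$hugepage_info"
```

After a successful setup, HugePages_Total in the output should equal the N configured in /etc/sysctl.conf.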
Environment Variables:
The following Linux environment variables may have been tuned to optimize the performance of some areas of the system:
- GOMP_CPU_AFFINITY: Used to bind threads to specific CPUs. The variable should contain a space-separated or comma-separated list of CPUs. This list may contain different kinds of entries: either single CPU numbers in any order, a range of CPUs (M-N) or a range with some stride (M-N:S). CPU numbers are zero based. For example, GOMP_CPU_AFFINITY="0 3 1-2 4-15:2" will bind the initial thread to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12, and 14 respectively and then start assigning back from the beginning of the list. GOMP_CPU_AFFINITY=0 binds all threads to CPU 0. There is no libgomp library routine to determine whether a CPU affinity specification is in effect. As a workaround, language-specific library functions, e.g., getenv in C or GET_ENVIRONMENT_VARIABLE in Fortran, may be used to query the setting of the GOMP_CPU_AFFINITY environment variable. A defined CPU affinity on startup cannot be changed or disabled during the runtime of the application. If both GOMP_CPU_AFFINITY and OMP_PROC_BIND are set, OMP_PROC_BIND has a higher precedence. If neither has been set and OMP_PROC_BIND is unset, or when OMP_PROC_BIND is set to FALSE, the host system will handle the assignment of threads to CPUs.
- OMP_DYNAMIC: Dynamic adjustment of threads. Enable or disable the dynamic adjustment of the number of threads within a team. The value of this environment variable shall be TRUE or FALSE. If undefined, dynamic adjustment is disabled by default.
- OMP_SCHEDULE: How threads are scheduled. Allows specifying the schedule type and chunk size. The value of the variable shall have the form: type[,chunk] where type is one of static, dynamic or guided. The optional chunk size shall be a positive integer. If undefined, dynamic scheduling and a chunk size of 1 are used.
- OMP_THREAD_LIMIT: Set the maximum number of threads. Specifies the number of threads to use for the whole program. The value of this variable shall be a positive integer. If undefined, the number of threads is not limited.
- MALLOC_CONF: This environment variable affects the execution of the allocation functions. If the environment variable MALLOC_CONF is set, the characters it contains will be interpreted as options.
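For example, a typical invocation might export the OpenMP-related variables before launching the binary. Here ./a.out is a placeholder and the values are illustrative, not recommendations:

```shell
# Pin threads and fix the schedule for a hypothetical OpenMP run.
export GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"  # explicit placement list (see above)
export OMP_DYNAMIC=FALSE                   # keep the thread-team size fixed
export OMP_SCHEDULE="static,16"            # static schedule, chunk size 16
export OMP_THREAD_LIMIT=16                 # cap the whole program at 16 threads
# ./a.out                                  # placeholder for the real binary
echo "affinity=$GOMP_CPU_AFFINITY schedule=$OMP_SCHEDULE"
```

Because these are plain environment variables, they can also be set per-run on the command line, e.g. "OMP_SCHEDULE=static,16 ./a.out".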
Firmware Settings:
One or more of the following settings may have been set. If so, the "Platform Notes" section of the report will say so; and you can read below to find out more about what these settings mean.
- AMD SMT Option (Default = Enabled): This feature allows enabling or disabling of logical processor cores on processors supporting AMD SMT. When enabled, each physical processor core operates as two logical processor cores. When disabled, each physical core operates as only one logical processor core. Enabling this option can improve overall performance for applications that benefit from a higher processor core count.
- Thermal Configuration (Default = Optimal Cooling): This feature allows the user to select the fan cooling solution for the system. Values for this BIOS option can be:
- Optimal Cooling: Provides the most efficient solution by configuring fan speeds to the minimum required to provide adequate cooling.
- Increased Cooling: Will run fans at higher speeds to provide additional cooling. Increased Cooling should be selected when non-HPE storage controllers are cabled to the embedded hard drive cage, or if the system is experiencing thermal issues that cannot be resolved in another manner.
- Maximum Cooling: Will provide the maximum cooling available by this platform.
- Determinism Control (Default = Auto): This option allows the user to choose between an Auto and Manual mode for Determinism Control. Values for this BIOS option can be:
- Auto: The system will decide what Performance Determinism setting to use. Auto also uses the processor fused values for Determinism.
- Manual: Allows the user to select either Power Deterministic or Performance Deterministic as described below.
- Performance Determinism (Default = Performance Deterministic): This option allows the user to configure the AMD processor Determinism setting for AGESA ("AMD Generic Encapsulated Software Architecture", a bootstrap protocol by which system devices on AMD64-architecture mainboards are initialized) control or BIOS control. Values for this BIOS option can be:
- Performance Deterministic: This option allows for AGESA control and provides predictable (capped) performance across all processors of the same type in the system. This could cause the system to run at or report lower than expected power values.
- Power Deterministic: This option will not enable performance deterministic control and allows for BIOS control. This option will maximize performance with power limits defined by the system design and operate as close to the thermal design point (TDP) as possible.
- Package Power Limit Control Mode (Default = Auto): This is a per Processor Power Limit value applicable for all populated processors in the system. This can be set to limit the processor power to a certain value. Values for this BIOS option can be:
- Auto: Uses the default processor value.
- Manual: Lets the user set a power limit and exposes a Package Power Limit value field into which the user can enter a number. If a number is entered higher than what the processor and/or system supports, the BIOS will limit the power to the maximum possible value.
- Workload Profile (Default = General Power Efficient Compute): This option allows a user to choose a workload profile that best fits the user's needs. The workload profiles control many power and performance settings that are relevant to general workload areas. Values for this BIOS option can be:
- General Power Efficient Compute, General Peak Frequency Compute, General Throughput Compute, Virtualization - Power Efficient, Virtualization - Max Performance, Low Latency, Mission Critical, Transaction Application Processing, High Performance Compute (HPC), Decision Support, Graphic Processing, I/O Throughput, and Custom.
- Power Regulator (Default = Static High Performance Mode): This option can only be configured if the Workload Profile is set to Custom. This feature allows the user to select the following Power Regulator support:
- Dynamic Power Savings Mode: This mode allows the automatic variation of the processor speed and power usage based on processor utilization, resulting in a reduction in overall power consumption with little or no impact on performance. It does not require OS support.
- Static Low Power Mode: This mode reduces the processor speed and power usage and guarantees a lower maximum power usage for the system.
- Static High Performance Mode: This mode allows the processors to run in their maximum power/performance state at all times, regardless of the OS power management policy.
- OS Control Mode: This mode allows the processors to run in their maximum power/performance state at all times unless the OS enables a power management policy.
- Minimum Processor Idle Power Core C-State (Default = C6 State): This option can only be configured if the Workload Profile is set to Custom. This feature selects the processor's lowest idle power state (C-state) that the operating system uses. The higher the C-state, the lower the power usage of that idle state (C6 is the lowest power idle state supported by the processor). Values for this setting can be:
- C6 State: While in C6, the core PLLs are turned off, the core caches are flushed and the core state is saved to the Last Level Cache. Power Gates are used to reduce power consumption to close to zero. C6 is considered an inactive core.
- C1E State: C1E is defined as the enhanced halt state. While in C1E no instructions are being executed. C1E is considered an active core.
- No C-states: No C-states is defined as C0, which is defined as the active state. While in C0, instructions are being executed by the core.
- C-State Efficiency mode (Default = Enabled): This option adjusts core frequency in conjunction with C-state changes. When enabled, the processor monitors the workload and modulates core frequency to maintain high C0 residency. This also has latency and power benefits when the CPU is not 100% utilized.
- XGMI Force Link Width (Default = Auto): This option forces the XGMI link width to a value set by the user. Setting the correct value for the workload can improve performance.
- Memory Patrol Scrubbing (Default = Enabled): This option allows for correction of soft memory errors. Over the length of system runtime, the risk of producing multi-bit and uncorrected errors is reduced with this option. Values for this BIOS setting can be:
- Enabled: Correction of soft memory errors can occur during runtime.
- Disabled: Soft memory error correction is turned off during runtime.
- Memory Interleaving (Default = Enabled): This option allows the CPUs to increase the memory bandwidth for an application. Values for this BIOS setting can be:
- Enabled: Consecutive memory blocks, often cache lines, are read from different memory banks, which may result in increased throughput and reduced latency.
- Disabled: Consecutive memory blocks are in the same memory bank, which may result in reduced throughput and increased latency.
- NUMA memory domains per socket (Default = Auto): This option allows the user to divide the memory domains that each socket has into a certain number of NUMA memory domains for better memory bandwidth. Values for this BIOS setting can be:
- One memory domain per socket: Each processor socket will have one memory domain.
- Two memory domains per socket: Each processor socket will have two memory domains.
- Four memory domains per socket: Each processor socket will have four memory domains.
- Auto: The system will default to One memory domain per socket.
- Last-Level Cache (LLC) as NUMA Node (Default = Disabled): When enabled, this option divides the processor's cores into additional NUMA nodes based on the L3 cache. Enabling this feature can increase performance for workloads that are NUMA aware and optimized.
- Memory PStates (Default = Auto): This setting controls the power state support of the memory controllers.
- Data Fabric C-State Enable (Default = Auto): Allows the Infinity Fabric to go into lower power states when idle, similar to CPU core C-States. There can be a delay changing back to full-power mode, causing latency jitter. For a low latency workload, or one with bursty I/O, one could disable this feature to achieve more performance at the cost of higher power consumption. Values for this BIOS setting can be:
- Auto: Dynamically allow the Infinity Fabric to go to a lower-power state when the processor has entered C-States.
- Force Enabled: Force the Infinity Fabric to go to a lower-power state when the processor has entered C-States.
- Disabled: Do not allow the Infinity Fabric to go to a lower-power state when the processor has entered C-States.
- Infinity Fabric Power Management (Default = Enabled): When this feature is enabled, the EPYC processor will dynamically vary the clock frequency of the Infinity Fabric based on activity level. For NUMA optimized workloads, allowing the Infinity Fabric to run slower can lead to increased overall performance due to an increase in CPU boost. Disabling this feature may be necessary for latency sensitive workloads. Values for this BIOS setting can be:
- Disabled: Enable fixed Infinity Fabric P-State control.
- Enabled: Dynamically switch Infinity Fabric P-State based on link usage.
- Infinity Fabric Performance State (Default = Auto): This feature allows for customizing the performance state (P-State) of the Infinity Fabric when Infinity Fabric Power Management is disabled. The default is Auto when Infinity Fabric Power Management is set to Enabled. P0 is ideal for latency sensitive or I/O centric workloads. P3 is ideal for achieving best core boost frequencies. Values for this BIOS setting can be:
- P0: Will force the Infinity Fabric and memory controllers into full-power mode, eliminating latency jitter. Highest performing Infinity Fabric P-State.
- P1: Next highest performing Infinity Fabric P-State, after P0.
- P2: Next highest performing Infinity Fabric P-State, after P1.
- P3: Minimum Infinity Fabric P-State.
- L1 HW Prefetcher (Default = Enabled): Use this option to disable the L1 Stream HW prefetch feature. In some cases (e.g. workloads that are random in nature), setting this option to Disabled can improve performance. However, most workloads will benefit from this setting being Enabled, as the prefetcher gathers data and keeps the core pipeline busy.
- L2 HW Prefetcher (Default = Enabled): Use this option to disable the L2 Stream HW prefetch feature. In some cases (e.g. workloads that are random in nature), setting this option to Disabled can improve performance. However, most workloads will benefit from this setting being Enabled, as the prefetcher gathers data and keeps the core pipeline busy.
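Several of the firmware options above are visible from the OS after boot. The read-only sketch below infers the SMT and NUMA-domain settings; lscpu may be absent on minimal installs, hence the guard, and the mapping in the comments is an interpretation, not a guarantee:

```shell
# Infer firmware-controlled topology from the running OS.
if command -v lscpu >/dev/null 2>&1; then
    threads_per_core=$(lscpu | awk -F: '/Thread\(s\) per core/ {gsub(/ /,"",$2); print $2}')
    numa_nodes=$(lscpu | awk -F: '/^NUMA node\(s\)/ {gsub(/ /,"",$2); print $2}')
fi
# 2 threads per core suggests AMD SMT is enabled; the NUMA node count
# reflects the "NUMA memory domains per socket" setting (and, if enabled,
# LLC as NUMA Node) multiplied by the number of populated sockets.
echo "threads_per_core=${threads_per_core:-unknown}"
echo "numa_nodes=${numa_nodes:-unknown}"
```

For example, a two-socket system with two memory domains per socket would typically report 4 NUMA nodes.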
Last updated March 10, 2022.