Supermicro Hyper A+ Server AS -2126HS-TN
SPECjbb2015-MultiJVM max-jOPS: 429921
SPECjbb2015-MultiJVM critical-jOPS: 183124
Tested by: Supermicro
Test Sponsor: Supermicro
Test location: San Jose, California
Test date: September 12, 2024
SPEC license #: 001176
Hardware Availability: Oct-2024
Software Availability: Jul-2024
Publication: Thu Oct 10 13:03:35 EDT 2024
Benchmark Results Summary
 
Overall Throughput RT curve
Overall SUT (System Under Test) Description
Vendor Supermicro
Vendor URL https://www.supermicro.com
System Source Single Supplier
System Designation Server Rack
Total Systems 1
All SUT Systems Identical YES
Total Nodes 1
All Nodes Identical YES
Nodes Per System 1
Total Chips 2
Total Cores 64
Total Threads 128
Total Memory Amount (GB) 1536
Total OS Images 1
SW Environment Non-virtual
 
Hardware hw_1
Name Hyper A+ Server AS -2126HS-TN
Vendor Supermicro
Vendor URL https://www.supermicro.com
Available Oct-2024
Model H14DSH
Form Factor 2U
CPU Name AMD EPYC 9355
CPU Characteristics 32 core, 3.55GHz, 256MB L3 Cache (Max. Boost Clock up to 4.4GHz)
Number of Systems 1
Nodes Per System 1
Chips Per System 2
Cores Per System 64
Cores Per Chip 32
Threads Per System 128
Threads Per Core 2
Version 1.1 08/28/2024
CPU Frequency (MHz) 3550
Primary Cache 32KB(I)+48KB(D) per core
Secondary Cache 1MB (I+D) per core
Tertiary Cache 256MB (I+D) on chip per chip
Other Cache None
Disk 1 x 3.84 TB NVMe PCIe Gen5.0
File System btrfs
Memory Amount (GB) 1536
# and size of DIMM(s) 24 x 64GB
Memory Details 64GB 2Rx4 DDR5 6400 MHz, running at 6000 MHz.
# and type of Network Interface Cards (NICs) 1 x 1 GbE NIC
Power Supply Quantity and Rating (W) 2 x 1600
Other Hardware None
Cabinet/Housing/Enclosure None
Shared Description None
Shared Comment None
Notes
  • NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
  • Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
Other Hardware network_1
Name None
Vendor None
Vendor URL None
Version None
Available None
Bitness None
Notes None
Operating System os_1
Name SUSE Linux Enterprise Server 15 SP6
Vendor SUSE
Vendor URL http://suse.com/
Version 6.4.0-150600.23.17-default
Available Jun-2024
Bitness 64
Notes None
Java Virtual Machine jvm_1
Name Oracle Java SE 22.0.2
Vendor Oracle
Vendor URL http://oracle.com/
Version Java HotSpot 64-bit Server VM, version 22.0.2
Available Jul-2024
Bitness 64
Notes None
Other Software other_1
Name None
Vendor None
Vendor URL None
Version None
Available None
Bitness None
Notes None
Hardware
OS Images os_Image_1(1)
Hardware Description hw_1
Number of Systems 1
SW Environment non-virtual
Tuning

BIOS Settings:

  • NUMA nodes per socket = NPS4
  • Determinism Control = Manual
  • Determinism Enable = Power
  • xGMI Link Configuration = 4 xGMI Links
  • 4 Link xGMI max speed = 32Gbps
  • TDP Control = Manual
  • TDP = 300
  • Package Power Limit Control = Manual
  • Package Power Limit = 300

Notes None
OS Image os_Image_1
JVM Instances jvm_Ctr_1(1), jvm_Backend_1(16), jvm_TxInjector_1(16)
OS Image Description os_1
Tuning

  • cpupower -c all frequency-set -g performance
  • tuned-adm profile throughput-performance
  • ulimit -n 1024000

  • echo 960000 > /proc/sys/kernel/sched_rt_runtime_us
  • echo 800000000 > /proc/sys/kernel/sched_latency_ns
  • echo 40000 > /proc/sys/kernel/sched_migration_cost_ns
  • echo 410000000 > /proc/sys/kernel/sched_min_granularity_ns
  • echo 2000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
  • echo 9000 > /proc/sys/kernel/sched_nr_migrate
  • echo 10000 > /proc/sys/vm/dirty_expire_centisecs
  • echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
  • echo 40 > /proc/sys/vm/dirty_ratio
  • echo 10 > /proc/sys/vm/dirty_background_ratio
  • echo 10 > /proc/sys/vm/swappiness
  • echo 0 > /proc/sys/kernel/numa_balancing
  • echo 0 > /proc/sys/vm/numa_stat
  • echo always > /sys/kernel/mm/transparent_hugepage/enabled
  • echo always > /sys/kernel/mm/transparent_hugepage/defrag

Notes None
JVM Instance jvm_Ctr_1
Parts of Benchmark Controller
JVM Instance Description jvm_1
Command Line

-Xms3g -Xmx3g -Xmn2g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

Tuning

Used numactl to interleave memory across all NUMA nodes (see the launch sketch after this block)

  • numactl --interleave=all

Notes None
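
For context, a minimal sketch of how the Controller pieces above combine into a single launch line, assuming the SPECjbb2015 kit's specjbb2015.jar entry point and MULTICONTROLLER run mode (the log redirection and working directory are illustrative, not from this report):

  # Controller: memory interleaved across all nodes, small heap, single GC thread
  numactl --interleave=all java -Xms3g -Xmx3g -Xmn2g -XX:+UseParallelGC \
      -XX:ParallelGCThreads=1 -XX:CICompilerCount=2 \
      -jar specjbb2015.jar -m MULTICONTROLLER > controller.log 2>&1 &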
JVM Instance jvm_Backend_1
Parts of Benchmark Backend
JVM Instance Description jvm_1
Command Line

-Xms31g -Xmx31g -Xmn29g -XX:AllocatePrefetchInstr=2 -XX:+UseParallelGC -XX:ParallelGCThreads=8 -XX:LargePageSizeInBytes=2m -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=95 -XX:MaxTenuringThreshold=15 -XX:InlineSmallCode=11k -XX:MaxGCPauseMillis=100 -XX:LoopUnrollLimit=200 -XX:+UseTransparentHugePages -XX:TLABAllocationWeight=2 -XX:ThreadStackSize=140 -XX:CompileThresholdScaling=120 -XX:CICompilerCount=4 -XX:AutoBoxCacheMax=32 -XX:OnStackReplacePercentage=100 -XX:TLABSize=1m -XX:MinTLABSize=1m -XX:-ResizeTLAB -XX:TLABWasteTargetPercent=1 -XX:TLABWasteIncrement=1 -XX:YoungPLABSize=1m -XX:OldPLABSize=1m

Tuning

Used numactl to affinitize each Backend JVM to 4 cores / 8 hardware threads (the binding pattern is sketched after this block)

  • Group1: numactl --physcpubind=0-3,64-67 --localalloc
  • Group2: numactl --physcpubind=4-7,68-71 --localalloc
  • Group3: numactl --physcpubind=8-11,72-75 --localalloc
  • Group4: numactl --physcpubind=12-15,76-79 --localalloc
  • Group5: numactl --physcpubind=16-19,80-83 --localalloc
  • Group6: numactl --physcpubind=20-23,84-87 --localalloc
  • Group7: numactl --physcpubind=24-27,88-91 --localalloc
  • Group8: numactl --physcpubind=28-31,92-95 --localalloc
  • Group9: numactl --physcpubind=32-35,96-99 --localalloc
  • Group10: numactl --physcpubind=36-39,100-103 --localalloc
  • Group11: numactl --physcpubind=40-43,104-107 --localalloc
  • Group12: numactl --physcpubind=44-47,108-111 --localalloc
  • Group13: numactl --physcpubind=48-51,112-115 --localalloc
  • Group14: numactl --physcpubind=52-55,116-119 --localalloc
  • Group15: numactl --physcpubind=56-59,120-123 --localalloc
  • Group16: numactl --physcpubind=60-63,124-127 --localalloc
Notes None
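
The sixteen bindings above follow a regular pattern: Group N is bound to physical cores (N-1)*4 through (N-1)*4+3 plus their SMT siblings at +64. A small sketch that regenerates the list, assuming this SUT's CPU numbering (logical CPUs 64-127 are the SMT siblings of 0-63):

  # Print the per-group numactl bindings used for the Backend JVMs
  for g in $(seq 0 15); do
      lo=$((g * 4)); hi=$((lo + 3))
      echo "Group$((g + 1)): numactl --physcpubind=${lo}-${hi},$((lo + 64))-$((hi + 64)) --localalloc"
  done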
JVM Instance jvm_TxInjector_1
Parts of Benchmark TxInjector
JVM Instance Description jvm_1
Command Line

-Xms3g -Xmx3g -Xmn2g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2

Tuning

Used numactl to affinitize each Transaction Injector JVM to 4 cores / 8 hardware threads, matching its Backend group (a combined launch sketch follows this block)

  • Group1: numactl --physcpubind=0-3,64-67 --localalloc
  • Group2: numactl --physcpubind=4-7,68-71 --localalloc
  • Group3: numactl --physcpubind=8-11,72-75 --localalloc
  • Group4: numactl --physcpubind=12-15,76-79 --localalloc
  • Group5: numactl --physcpubind=16-19,80-83 --localalloc
  • Group6: numactl --physcpubind=20-23,84-87 --localalloc
  • Group7: numactl --physcpubind=24-27,88-91 --localalloc
  • Group8: numactl --physcpubind=28-31,92-95 --localalloc
  • Group9: numactl --physcpubind=32-35,96-99 --localalloc
  • Group10: numactl --physcpubind=36-39,100-103 --localalloc
  • Group11: numactl --physcpubind=40-43,104-107 --localalloc
  • Group12: numactl --physcpubind=44-47,108-111 --localalloc
  • Group13: numactl --physcpubind=48-51,112-115 --localalloc
  • Group14: numactl --physcpubind=52-55,116-119 --localalloc
  • Group15: numactl --physcpubind=56-59,120-123 --localalloc
  • Group16: numactl --physcpubind=60-63,124-127 --localalloc
Notes None
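
A minimal end-to-end sketch of launching the sixteen Backend/TxInjector pairs with these bindings, using the group and JVM names from this report (Group1..Group16, beJVM, txiJVM1). BE_OPTS and TI_OPTS stand for the Backend and TxInjector JVM options listed above; the -m/-G/-J argument style follows the kit's run_multi.sh conventions and may differ slightly between kit versions:

  # Launch one Backend and one TxInjector per group, both pinned to the same 4 cores / 8 threads
  for g in $(seq 0 15); do
      lo=$((g * 4)); hi=$((lo + 3))
      cpus="${lo}-${hi},$((lo + 64))-$((hi + 64))"
      grp="Group$((g + 1))"
      numactl --physcpubind=$cpus --localalloc java $BE_OPTS \
          -jar specjbb2015.jar -m BACKEND -G=$grp -J=beJVM > ${grp}_be.log 2>&1 &
      numactl --physcpubind=$cpus --localalloc java $TI_OPTS \
          -jar specjbb2015.jar -m TXINJECTOR -G=$grp -J=txiJVM1 > ${grp}_ti.log 2>&1 &
  done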
max-jOPS = last jOPS level passed before the first failure
jOPS       420774   425348   429921   434495   439068
Pass/Fail  Pass     Pass     Pass     Fail     Fail
critical-jOPS = Geomean of jOPS at the 10000, 25000, 50000, 75000, and 100000 us SLAs
Response-time percentile: 99th
SLA (us)  10000    25000    50000    75000    100000   Geomean
jOPS      119829   151633   188499   234017   256930   183124
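
As a consistency check, taking the geometric mean of the five SLA points above reproduces the reported critical-jOPS to within rounding:

  critical-jOPS = (119829 x 151633 x 188499 x 234017 x 256930)^(1/5) ≈ 183124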
  Percentile
SLA        10-th   50-th   90-th   95-th   99-th   100-th
500us 9147 / 13721 4574 / 9147 - / 4574 - / 4574 - / 4574 - / 4574
1000us 59457 / 64031 18295 / 22868 4574 / 9147 4574 / 9147 - / 4574 - / 4574
5000us 297286 / 301860 118914 / 123488 86899 / 91473 82325 / 86899 68604 / 73178 9147 / 4574
10000us 320154 / 324728 260697 / 265271 164651 / 169224 155503 / 160077 123488 / 105193 13721 / 4574
25000us 333875 / 338449 311007 / 315580 260697 / 256123 233255 / 237829 160077 / 105193 13721 / 4574
50000us 343022 / 347596 324728 / 329301 292712 / 297286 269844 / 274418 201240 / 141783 13721 / 4574
75000us 347596 / 352170 329301 / 333875 301860 / 306433 297286 / 288139 246976 / 169224 13721 / 4574
100000us 352170 / 356743 338449 / 343022 315580 / 320154 301860 / 306433 269844 / 196666 13721 / 4574
200000us 375038 / 379611 352170 / 356743 343022 / 347596 338449 / 333875 329301 / 256123 178372 / 27442
500000us 425348 / 429921 397906 / 402479 375038 / 379611 365890 / 370464 361317 / 356743 297286 / 27442
1000000us 429921 / - 425348 / 429921 416200 / 420774 411627 / 416200 402479 / 407053 352170 / 27442
Probes jOPS / Total jOPS
Request Mix Accuracy
Note
(Actual % in the Mix - Expected % in the Mix) must be within:
'Main Tx' limit of +/-5.0% for the requests whose expected % in the mix is >= 10.0%
'Minor Tx' limit of +/-1.0% for the requests whose expected % in the mix is < 10.0%
There were no non-critical failures in Response Time curve building
Delay between status pings
IR/PR Accuracy
This section lists only the properties explicitly set by the user
Property Name Default Controller Group1.Backend.beJVM Group1.TxInjector.txiJVM1 Group10.Backend.beJVM Group10.TxInjector.txiJVM1 Group11.Backend.beJVM Group11.TxInjector.txiJVM1 Group12.Backend.beJVM Group12.TxInjector.txiJVM1 Group13.Backend.beJVM Group13.TxInjector.txiJVM1 Group14.Backend.beJVM Group14.TxInjector.txiJVM1 Group15.Backend.beJVM Group15.TxInjector.txiJVM1 Group16.Backend.beJVM Group16.TxInjector.txiJVM1 Group2.Backend.beJVM Group2.TxInjector.txiJVM1 Group3.Backend.beJVM Group3.TxInjector.txiJVM1 Group4.Backend.beJVM Group4.TxInjector.txiJVM1 Group5.Backend.beJVM Group5.TxInjector.txiJVM1 Group6.Backend.beJVM Group6.TxInjector.txiJVM1 Group7.Backend.beJVM Group7.TxInjector.txiJVM1 Group8.Backend.beJVM Group8.TxInjector.txiJVM1 Group9.Backend.beJVM Group9.TxInjector.txiJVM1
specjbb.comm.connect.client.pool.size 256 150 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170 256 170
specjbb.comm.connect.selector.runner.count 0 10 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5 20 5
specjbb.comm.connect.timeouts.connect 60000 900000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.timeouts.read 60000 800000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.timeouts.write 60000 800000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000 60000
specjbb.comm.connect.worker.pool.max 256 150 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200 256 200
specjbb.comm.connect.worker.pool.min 1 16 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32
specjbb.controller.maxir.maxFailedPoints 3 1
specjbb.customerDriver.threads 64 {probe=120, saturate=120, service=70}
specjbb.forkjoin.workers 128 {Tier1=250, Tier2=10, Tier3=40}
specjbb.group.count 1 16
specjbb.mapreducer.pool.size 128 10
specjbb.txi.pergroup.count 1 1
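
For reference, the Controller-column overrides in the table translate into SPECjbb2015 property settings. A sketch of the corresponding entries, assuming the kit's property-file mechanism (e.g., config/specjbb2015.props) and dotted-suffix expansion of the grouped values shown as {probe=..., Tier1=..., ...}; the file name and suffix naming are assumptions per the kit's user guide, and the per-JVM Backend/TxInjector overrides (pool sizes, selector counts) are applied to each agent separately:

  specjbb.group.count=16
  specjbb.txi.pergroup.count=1
  specjbb.mapreducer.pool.size=10
  specjbb.controller.maxir.maxFailedPoints=1
  specjbb.forkjoin.workers.Tier1=250
  specjbb.forkjoin.workers.Tier2=10
  specjbb.forkjoin.workers.Tier3=40
  specjbb.customerDriver.threads.probe=120
  specjbb.customerDriver.threads.saturate=120
  specjbb.customerDriver.threads.service=70
  specjbb.comm.connect.client.pool.size=150
  specjbb.comm.connect.selector.runner.count=10
  specjbb.comm.connect.timeouts.connect=900000
  specjbb.comm.connect.timeouts.read=800000
  specjbb.comm.connect.timeouts.write=800000
  specjbb.comm.connect.worker.pool.max=150
  specjbb.comm.connect.worker.pool.min=16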
 
Level: COMPLIANCE
Check Agent Result
Check properties on compliance All PASSED
 
Level: CORRECTNESS
Check Agent Result
Compare SM and HQ Inventory All PASSED
High-bound (max attempted) is 457363 IR
High-bound (settled) is 453063 IR