OS Images: os_Image_1(1)

Hardware Description: hw_1

Number of Systems: 1

SW Environment: Non-Virtual
Tuning:
- DRAM Refresh Delay - Performance
- DIMM Self Healing (Post Package Repair) on Uncorrectable Memory Error - Disabled
- Correctable Error Logging - Disabled
- Virtualization Technology - Disabled
- L2 Stream HW Prefetcher - Disabled
- NUMA Nodes Per Socket - 4
- L3 Cache as NUMA Domain - Enabled
- TDP Control - Manual
- Customized cTDP - 500 W
- Customized PPT - 500 W
- System Profile - Custom
- CPU Power Management - Maximum Performance
- C-States - Disabled
- Determinism Control - Manual
- Determinism Slider - Power Determinism
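With NUMA Nodes Per Socket set to 4 and L3 cache as NUMA domain enabled, the OS sees one NUMA node per L3 complex rather than one per socket. A quick, generic check that these BIOS settings took effect (not part of the disclosure; node counts and CPU lists are platform-specific):

# Expect one NUMA node per L3 (CCD), i.e. far more nodes than sockets.
numactl --hardware | head -n 1   # "available: N nodes (0-...)"
lscpu | grep -i numa             # node count and per-node CPU lists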
Notes: None
JVM Instances: jvm_Ctr_1(1), jvm_Backend_1(32), jvm_TxInjector_1(32)

OS Image Description: os_1
Tuning:
- echo 960000 > /proc/sys/kernel/sched_rt_runtime_us
- echo 800000000 > /proc/sys/kernel/sched_latency_ns
- echo 40000 > /proc/sys/kernel/sched_migration_cost_ns
- echo 900000000 > /proc/sys/kernel/sched_min_granularity_ns
- echo 700000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
- echo 9000 > /proc/sys/kernel/sched_nr_migrate
- echo 10000 > /proc/sys/vm/dirty_expire_centisecs
- echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
- echo 40 > /proc/sys/vm/dirty_ratio
- echo 10 > /proc/sys/vm/dirty_background_ratio
- echo 10 > /proc/sys/vm/swappiness
- echo 0 > /proc/sys/vm/numa_stat
- echo 0 > /proc/sys/kernel/numa_balancing
- echo always > /sys/kernel/mm/transparent_hugepage/enabled
- echo always > /sys/kernel/mm/transparent_hugepage/defrag
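These values are written to /proc and /sys at run time and do not survive a reboot. As a minimal sketch, the same tunings expressed as sysctl calls (assuming a kernel that still exposes the sched_* knobs under /proc/sys/kernel; newer kernels move them to /sys/kernel/debug/sched/), with the transparent-hugepage settings left as sysfs writes since they are not sysctls:

# Same tunings as above, applied via sysctl where a key exists.
sysctl -w kernel.sched_rt_runtime_us=960000
sysctl -w kernel.sched_latency_ns=800000000
sysctl -w kernel.sched_migration_cost_ns=40000
sysctl -w kernel.sched_min_granularity_ns=900000000
sysctl -w kernel.sched_wakeup_granularity_ns=700000000
sysctl -w kernel.sched_nr_migrate=9000
sysctl -w vm.dirty_expire_centisecs=10000
sysctl -w vm.dirty_writeback_centisecs=1500
sysctl -w vm.dirty_ratio=40
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.swappiness=10
sysctl -w vm.numa_stat=0
sysctl -w kernel.numa_balancing=0
# Transparent hugepages live in sysfs, not sysctl:
echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/transparent_hugepage/defrag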
Notes: None
Parts of Benchmark: Controller

JVM Instance Description: jvm_1

Command Line: -Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2
Tuning: Used numactl to interleave memory across all NUMA nodes
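Combining the command line and the interleave tuning, the Controller launch presumably looked like the sketch below. The --interleave=all form and the MULTICONTROLLER mode follow the stock SPECjbb2015 multi-JVM run scripts and are assumptions, not part of this disclosure:

# Sketch of the Controller launch; mode flag assumed from the
# standard SPECjbb2015 multi-JVM run scripts.
numactl --interleave=all java -Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC \
    -XX:ParallelGCThreads=1 -XX:CICompilerCount=2 \
    -jar specjbb2015.jar -m MULTICONTROLLER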
Notes: None
Parts of Benchmark: Backend

JVM Instance Description: jvm_1

Command Line: -Xms31g -Xmx31g -Xmn28g -XX:AllocatePrefetchInstr=2 -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:LargePageSizeInBytes=2m -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:SurvivorRatio=15 -XX:TargetSurvivorRatio=95 -XX:MaxTenuringThreshold=12 -XX:InlineSmallCode=11k -XX:MaxGCPauseMillis=100 -XX:LoopUnrollLimit=200 -XX:+UseTransparentHugePages -XX:TLABAllocationWeight=2 -XX:ThreadStackSize=140 -XX:CompileThresholdScaling=120 -XX:CICompilerCount=4 -XX:AutoBoxCacheMax=32 -XX:OnStackReplacePercentage=100 -XX:-ResizeTLAB -XX:TLABWasteTargetPercent=1 -XX:TLABWasteIncrement=1 -XX:YoungPLABSize=1m -XX:OldPLABSize=1m -XX:+ScavengeBeforeFullGC -XX:PrefetchCopyIntervalInBytes=256 -XX:TLABSize=64k -XX:MinTLABSize=64k
Tuning:
Used numactl to affinitize each Backend JVM to 8 cores / 16 threads (a generator sketch follows this list):
Group1: numactl --physcpubind=0-7,256-263 --localalloc
Group2: numactl --physcpubind=8-15,264-271 --localalloc
Group3: numactl --physcpubind=16-23,272-279 --localalloc
Group4: numactl --physcpubind=24-31,280-287 --localalloc
Group5: numactl --physcpubind=32-39,288-295 --localalloc
Group6: numactl --physcpubind=40-47,296-303 --localalloc
Group7: numactl --physcpubind=48-55,304-311 --localalloc
Group8: numactl --physcpubind=56-63,312-319 --localalloc
Group9: numactl --physcpubind=64-71,320-327 --localalloc
Group10: numactl --physcpubind=72-79,328-335 --localalloc
Group11: numactl --physcpubind=80-87,336-343 --localalloc
Group12: numactl --physcpubind=88-95,344-351 --localalloc
Group13: numactl --physcpubind=96-103,352-359 --localalloc
Group14: numactl --physcpubind=104-111,360-367 --localalloc
Group15: numactl --physcpubind=112-119,368-375 --localalloc
Group16: numactl --physcpubind=120-127,376-383 --localalloc
Group17: numactl --physcpubind=128-135,384-391 --localalloc
Group18: numactl --physcpubind=136-143,392-399 --localalloc
Group19: numactl --physcpubind=144-151,400-407 --localalloc
Group20: numactl --physcpubind=152-159,408-415 --localalloc
Group21: numactl --physcpubind=160-167,416-423 --localalloc
Group22: numactl --physcpubind=168-175,424-431 --localalloc
Group23: numactl --physcpubind=176-183,432-439 --localalloc
Group24: numactl --physcpubind=184-191,440-447 --localalloc
Group25: numactl --physcpubind=192-199,448-455 --localalloc
Group26: numactl --physcpubind=200-207,456-463 --localalloc
Group27: numactl --physcpubind=208-215,464-471 --localalloc
Group28: numactl --physcpubind=216-223,472-479 --localalloc
Group29: numactl --physcpubind=224-231,480-487 --localalloc
Group30: numactl --physcpubind=232-239,488-495 --localalloc
Group31: numactl --physcpubind=240-247,496-503 --localalloc
Group32: numactl --physcpubind=248-255,504-511 --localalloc
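The 32 bindings follow a regular pattern: group g gets physical cores g*8 through g*8+7 plus their SMT siblings at an offset of 256 (the sibling numbering is taken from the CPU lists in this configuration). A sketch that regenerates the exact list above:

# Regenerate the 32 group bindings; the SMT sibling of cpu N is
# cpu N+256 on this system.
for g in $(seq 0 31); do
    lo=$((g * 8)); hi=$((lo + 7))
    echo "Group$((g + 1)): numactl --physcpubind=${lo}-${hi},$((lo + 256))-$((hi + 256)) --localalloc"
done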
Notes: None
Parts of Benchmark: TxInjector

JVM Instance Description: jvm_1

Command Line: -Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC -XX:ParallelGCThreads=1 -XX:CICompilerCount=2
Tuning:
Used numactl to affinitize each TxInjector JVM to 8 cores / 16 threads, with the same 32 group bindings as the Backend JVMs:
Group1: numactl --physcpubind=0-7,256-263 --localalloc
Group2: numactl --physcpubind=8-15,264-271 --localalloc
Group3: numactl --physcpubind=16-23,272-279 --localalloc
Group4: numactl --physcpubind=24-31,280-287 --localalloc
Group5: numactl --physcpubind=32-39,288-295 --localalloc
Group6: numactl --physcpubind=40-47,296-303 --localalloc
Group7: numactl --physcpubind=48-55,304-311 --localalloc
Group8: numactl --physcpubind=56-63,312-319 --localalloc
Group9: numactl --physcpubind=64-71,320-327 --localalloc
Group10: numactl --physcpubind=72-79,328-335 --localalloc
Group11: numactl --physcpubind=80-87,336-343 --localalloc
Group12: numactl --physcpubind=88-95,344-351 --localalloc
Group13: numactl --physcpubind=96-103,352-359 --localalloc
Group14: numactl --physcpubind=104-111,360-367 --localalloc
Group15: numactl --physcpubind=112-119,368-375 --localalloc
Group16: numactl --physcpubind=120-127,376-383 --localalloc
Group17: numactl --physcpubind=128-135,384-391 --localalloc
Group18: numactl --physcpubind=136-143,392-399 --localalloc
Group19: numactl --physcpubind=144-151,400-407 --localalloc
Group20: numactl --physcpubind=152-159,408-415 --localalloc
Group21: numactl --physcpubind=160-167,416-423 --localalloc
Group22: numactl --physcpubind=168-175,424-431 --localalloc
Group23: numactl --physcpubind=176-183,432-439 --localalloc
Group24: numactl --physcpubind=184-191,440-447 --localalloc
Group25: numactl --physcpubind=192-199,448-455 --localalloc
Group26: numactl --physcpubind=200-207,456-463 --localalloc
Group27: numactl --physcpubind=208-215,464-471 --localalloc
Group28: numactl --physcpubind=216-223,472-479 --localalloc
Group29: numactl --physcpubind=224-231,480-487 --localalloc
Group30: numactl --physcpubind=232-239,488-495 --localalloc
Group31: numactl --physcpubind=240-247,496-503 --localalloc
Group32: numactl --physcpubind=248-255,504-511 --localalloc
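Each TxInjector shares its group's binding with the corresponding Backend, so the pair lands on the same 8 cores and allocates memory locally. A sketch of launching one group's TxInjector, assuming the stock SPECjbb2015 multi-JVM conventions (-m TXINJECTOR with -G/-J group and JVM IDs; the ID names here are hypothetical, and the Backend is launched analogously with its own flag set under the same binding):

# Group 1 TxInjector; group/JVM IDs are illustrative only.
BIND="numactl --physcpubind=0-7,256-263 --localalloc"
$BIND java -Xms8g -Xmx8g -Xmn6g -XX:+UseParallelGC -XX:ParallelGCThreads=1 \
    -XX:CICompilerCount=2 -jar specjbb2015.jar -m TXINJECTOR -G=GRP1 -J=JVM_TI_1 &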
Notes: None