OS Images |
os_Image_1(1)
|
Hardware Description |
hw_1
|
Number of Systems |
1
|
SW Environment |
non-virtual
|
Tuning |
- Logical Processor : Enabled
- L1 Stream HW Prefetcher : Disabled
- L2 Stream HW Prefetcher : Disabled
- MADT Core Enumeration : Linear
- NUMA Nodes Per Socket : 4
- L3 cache as NUMA Domain : Enabled
- System Profile : Custom
- Determinism Slider : Power Determinism
- Algorithm Performance Boost Disable (ApbDis) : Enabled
- ApbDis Fixed Socket P-State : P0
|
Notes |
None
|
|
JVM Instances |
jvm_Backend_1(32)
|
OS Image Description |
os_1
|
Tuning |
- echo 960000 > /proc/sys/kernel/sched_rt_runtime_us
- echo 20000000 > /proc/sys/kernel/sched_latency_ns
- echo 40000 > /proc/sys/kernel/sched_migration_cost_ns
- echo 810000000 > /proc/sys/kernel/sched_min_granularity_ns
- echo 200000000 > /proc/sys/kernel/sched_wakeup_granularity_ns
- echo 9000 > /proc/sys/kernel/sched_nr_migrate
- echo 10000 > /proc/sys/vm/dirty_expire_centisecs
- echo 1500 > /proc/sys/vm/dirty_writeback_centisecs
- echo 40 > /proc/sys/vm/dirty_ratio
- echo 10 > /proc/sys/vm/dirty_background_ratio
- echo 10 > /proc/sys/vm/swappiness
- echo 0 > /proc/sys/kernel/numa_balancing
- echo 0 > /proc/sys/vm/numa_stat
- echo always > /sys/kernel/mm/transparent_hugepage/enabled
- echo always > /sys/kernel/mm/transparent_hugepage/defrag
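The fifteen procfs/sysfs writes above can be collected into a single script. This is a sketch, not part of the reported configuration: the `apply_tunings` helper and the `DRY_RUN` guard are illustrative additions, and applying the writes for real requires root.

```shell
#!/usr/bin/env bash
# Sketch: apply the OS tunings listed above in one pass.
# DRY_RUN=1 (the default here) only prints the writes; set DRY_RUN=0
# and run as root to actually apply them.
DRY_RUN=${DRY_RUN:-1}

# value / target path, copied from the tuning list above
TUNINGS="
960000      /proc/sys/kernel/sched_rt_runtime_us
20000000    /proc/sys/kernel/sched_latency_ns
40000       /proc/sys/kernel/sched_migration_cost_ns
810000000   /proc/sys/kernel/sched_min_granularity_ns
200000000   /proc/sys/kernel/sched_wakeup_granularity_ns
9000        /proc/sys/kernel/sched_nr_migrate
10000       /proc/sys/vm/dirty_expire_centisecs
1500        /proc/sys/vm/dirty_writeback_centisecs
40          /proc/sys/vm/dirty_ratio
10          /proc/sys/vm/dirty_background_ratio
10          /proc/sys/vm/swappiness
0           /proc/sys/kernel/numa_balancing
0           /proc/sys/vm/numa_stat
always      /sys/kernel/mm/transparent_hugepage/enabled
always      /sys/kernel/mm/transparent_hugepage/defrag
"

apply_tunings() {
    echo "$TUNINGS" | while read -r value path; do
        [ -n "$path" ] || continue          # skip blank lines
        if [ "$DRY_RUN" = 1 ]; then
            printf 'echo %s > %s\n' "$value" "$path"
        else
            echo "$value" > "$path"
        fi
    done
}

apply_tunings
```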
|
Notes |
None
|
Parts of Benchmark |
Backend
|
JVM Instance Description |
jvm_1
|
Command Line |
-Xms32736M -Xmx32736M -Xmn30736M -XX:AllocatePrefetchInstr=2 -XX:+UseParallelGC -XX:ParallelGCThreads=16 -XX:LargePageSizeInBytes=2m -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch -XX:+UseLargePages -XX:SurvivorRatio=16 -XX:TargetSurvivorRatio=95 -XX:MaxTenuringThreshold=15 -XX:InlineSmallCode=11k -XX:MaxGCPauseMillis=100 -XX:LoopUnrollLimit=200 -XX:+UseTransparentHugePages -XX:TLABAllocationWeight=2 -XX:ThreadStackSize=140 -XX:CompileThresholdScaling=120 -XX:CICompilerCount=4 -XX:AutoBoxCacheMax=32 -XX:OnStackReplacePercentage=100 -XX:TLABSize=1m -XX:MinTLABSize=1m -XX:-ResizeTLAB -XX:TLABWasteTargetPercent=1 -XX:TLABWasteIncrement=1 -XX:YoungPLABSize=1m -XX:OldPLABSize=1m
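Since all 32 Backend instances use the identical flag string, the command line above can be held in one array and reused per launch. This is an illustrative sketch only: the `specjbb2015.jar -m BACKEND` invocation is a placeholder for the actual benchmark launch command, which the report does not show.

```shell
#!/usr/bin/env bash
# Sketch: the per-JVM flags from the command line above, kept in one
# array so every Backend instance is launched identically.
JVM_OPTS=(
  -Xms32736M -Xmx32736M -Xmn30736M
  -XX:AllocatePrefetchInstr=2
  -XX:+UseParallelGC -XX:ParallelGCThreads=16
  -XX:LargePageSizeInBytes=2m -XX:+UseLargePages -XX:+UseTransparentHugePages
  -XX:-UseAdaptiveSizePolicy -XX:+AlwaysPreTouch
  -XX:SurvivorRatio=16 -XX:TargetSurvivorRatio=95 -XX:MaxTenuringThreshold=15
  -XX:InlineSmallCode=11k -XX:MaxGCPauseMillis=100 -XX:LoopUnrollLimit=200
  -XX:TLABAllocationWeight=2 -XX:ThreadStackSize=140
  -XX:CompileThresholdScaling=120 -XX:CICompilerCount=4
  -XX:AutoBoxCacheMax=32 -XX:OnStackReplacePercentage=100
  -XX:TLABSize=1m -XX:MinTLABSize=1m -XX:-ResizeTLAB
  -XX:TLABWasteTargetPercent=1 -XX:TLABWasteIncrement=1
  -XX:YoungPLABSize=1m -XX:OldPLABSize=1m
)

# Dry-run: print the assembled command instead of executing it.
# The jar name and mode flag are placeholders, not from the report.
echo java "${JVM_OPTS[@]}" -jar specjbb2015.jar -m BACKEND
```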
|
Tuning |
Used numactl to affinitize each Backend JVM to 8 cores / 16 threads:
- Group1: --physcpubind=0-7,256-263 --localalloc
- Group2: --physcpubind=8-15,264-271 --localalloc
- Group3: --physcpubind=64-71,320-327 --localalloc
- Group4: --physcpubind=72-79,328-335 --localalloc
- Group5: --physcpubind=32-39,288-295 --localalloc
- Group6: --physcpubind=40-47,296-303 --localalloc
- Group7: --physcpubind=96-103,352-359 --localalloc
- Group8: --physcpubind=104-111,360-367 --localalloc
- Group9: --physcpubind=48-55,304-311 --localalloc
- Group10: --physcpubind=56-63,312-319 --localalloc
- Group11: --physcpubind=112-119,368-375 --localalloc
- Group12: --physcpubind=120-127,376-383 --localalloc
- Group13: --physcpubind=16-23,272-279 --localalloc
- Group14: --physcpubind=24-31,280-287 --localalloc
- Group15: --physcpubind=80-87,336-343 --localalloc
- Group16: --physcpubind=88-95,344-351 --localalloc
- Group17: --physcpubind=128-135,384-391 --localalloc
- Group18: --physcpubind=136-143,392-399 --localalloc
- Group19: --physcpubind=192-199,448-455 --localalloc
- Group20: --physcpubind=200-207,456-463 --localalloc
- Group21: --physcpubind=160-167,416-423 --localalloc
- Group22: --physcpubind=168-175,424-431 --localalloc
- Group23: --physcpubind=224-231,480-487 --localalloc
- Group24: --physcpubind=232-239,488-495 --localalloc
- Group25: --physcpubind=176-183,432-439 --localalloc
- Group26: --physcpubind=184-191,440-447 --localalloc
- Group27: --physcpubind=240-247,496-503 --localalloc
- Group28: --physcpubind=248-255,504-511 --localalloc
- Group29: --physcpubind=144-151,400-407 --localalloc
- Group30: --physcpubind=152-159,408-415 --localalloc
- Group31: --physcpubind=208-215,464-471 --localalloc
- Group32: --physcpubind=216-223,472-479 --localalloc
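The 32 pinned launches above can be driven by one loop over the CPU lists. A sketch under stated assumptions: the CPU ranges copy the Group1..Group32 table verbatim, while `BACKEND_CMD` is a hypothetical placeholder for the real Backend launch command, and `DRY_RUN=1` only prints the pinned command lines.

```shell
#!/usr/bin/env bash
# Sketch: launch one Backend JVM per core group via numactl.
DRY_RUN=${DRY_RUN:-1}
BACKEND_CMD="java -jar specjbb2015.jar -m BACKEND"   # placeholder, not from the report

# Group1..Group32 CPU lists, copied from the table above.
CPU_GROUPS=(
  "0-7,256-263"      "8-15,264-271"     "64-71,320-327"    "72-79,328-335"
  "32-39,288-295"    "40-47,296-303"    "96-103,352-359"   "104-111,360-367"
  "48-55,304-311"    "56-63,312-319"    "112-119,368-375"  "120-127,376-383"
  "16-23,272-279"    "24-31,280-287"    "80-87,336-343"    "88-95,344-351"
  "128-135,384-391"  "136-143,392-399"  "192-199,448-455"  "200-207,456-463"
  "160-167,416-423"  "168-175,424-431"  "224-231,480-487"  "232-239,488-495"
  "176-183,432-439"  "184-191,440-447"  "240-247,496-503"  "248-255,504-511"
  "144-151,400-407"  "152-159,408-415"  "208-215,464-471"  "216-223,472-479"
)

launch_backends() {
    local i=1
    for cpus in "${CPU_GROUPS[@]}"; do
        cmd="numactl --physcpubind=$cpus --localalloc $BACKEND_CMD"
        if [ "$DRY_RUN" = 1 ]; then
            printf 'Group%d: %s\n' "$i" "$cmd"
        else
            $cmd &                      # launch each pinned JVM in the background
        fi
        i=$((i + 1))
    done
}

launch_backends
```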
|
Notes |
None
|
|