Friday, 24 November 2017
For a while now I've been wanting to experiment with what happens when, instead of having either a global runqueue (the way BFS did) or per-CPU runqueues (the way MuQSS currently does), we make runqueues shared depending on CPU architecture topology.
Given that Simultaneous MultiThreaded (SMT) siblings, i.e. hyperthreads, are actually on the same physical core and share virtually all resources, it is almost free, at least at the hardware level, for processes or threads to bounce between the two (or more) siblings. This obviously doesn't take into account the fact that the kernel itself has many unique structures for each logical CPU, so sharing there is not really free. Additionally, it is interesting to see what happens if we extend that thinking to CPUs that only share cache, such as MultiCore (MC) siblings. Today's modern CPUs are virtually all a combination of one or both of these shared topologies.
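As a rough illustration of the topology in question, the kernel already exports it via sysfs, so you can see which logical CPUs share a core or a package on your own machine (cpu0 is used as an example here and the output obviously varies by machine):
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list   # SMT siblings sharing cpu0's physical core
cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list     # logical CPUs in the same package as cpu0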
At least theoretically, there could be significant advantages to decreasing the number of runqueues: less overhead from the runqueues themselves, and decreased latency from guaranteeing that each per-CPU scheduling decision has access to more processes. From the throughput side, the decreased overhead would also be helpful, at the potential expense of slightly more spinlock contention (the more a runqueue is shared, the more contention), but if the amount of sharing is kept small it should be negligible. From the actual sharing side, given the lack of a formal balancing system in MuQSS, sharing between the logical CPUs that are cheapest to switch or balance to should automatically improve throughput for certain workloads. Additionally, with SMT sharing, if light workloads can be bound to just the two threads on the same core, there could be better CPU speed consolidation and substantial power saving advantages.
To that end, I've created experimental code for MuQSS that does this exact thing in a configurable way. You can configure the scheduler to share by SMT siblings or MC siblings. Only the runqueue locks and the process skip lists are actually shared. The rest of the runqueue structures at this stage are all still discrete per logical CPU.
Here is a git tree based on 4.14 and the current 0.162 version of MuQSS:
4.14-muqss-rqshare
And for those who use traditional patches, here is a patch that can be applied on top of a muqss-162 patched kernel:
0001-Implement-the-ability-to-share-runqueues-when-CPUs-a.patch
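If you haven't stacked patches like this before, the sequence looks roughly like the following; the MuQSS patch filename is assumed here from the usual naming convention, so substitute whatever you actually downloaded:
cd linux-4.14
patch -p1 < ../4.14-sched-MuQSS_162.patch                                      # assumed filename for the muqss-162 patch
patch -p1 < ../0001-Implement-the-ability-to-share-runqueues-when-CPUs-a.patch # the runqueue sharing patch above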
While this is so far only a proof of concept, some throughput workloads seem to benefit when sharing is kept to SMT siblings; specifically, when there is only enough work for the real cores, there is a demonstrable improvement, and latency is more consistently kept within bounded levels. It's not all improvement though, with some workloads showing slightly lower throughput. When sharing is extended to MC siblings, the results are mixed, and they change dramatically depending on how many cores you have: some workloads benefit a lot, while others suffer a lot. Worst case latency improves the more sharing is done, but in its current rudimentary form there is very little to keep tasks bound to one CPU, and with the highly variable CPU frequencies of today's CPUs and the need to bind tasks to one CPU for an extended period to allow it to throttle up, throughput suffers when loads are light. Conversely, throughput seems to improve quite a lot at heavy loads.
Either way, this is pretty much an "untuned" addition to MuQSS, and for my testing at least, I think the SMT siblings sharing is advantageous; I have been running it successfully for a while now.
Regardless, if you're looking for something to experiment with, as MuQSS is more or less stable these days, it should be worth giving this patch a try and seeing what you find in terms of throughput and/or latency. As with all experimental patches, I cannot guarantee the stability of the code, though I am using it on my own desktop. Note that CPU load reporting is likely to be off. Make sure to report back any results you have!
Enjoy!
お楽しみください
Tuesday, 22 November 2016
linux-4.8-ck8, MuQSS version 0.144
Here's a new release to go along with and commemorate the 4.8.10 stable release (they're releasing stable releases faster than my development code now.)
linux-4.8-ck8 patch:
patch-4.8-ck8.lrz
MuQSS by itself:
4.8-sched-MuQSS_144.patch
There are a small number of updates to MuQSS itself.
Notably there's an improvement in interactive mode when SMT nice is enabled, realtime tasks are running, or CPU affinity is in use. Previously, tasks stuck behind one of those as the highest priority task would transiently be refused scheduling on CPUs they could otherwise have used.
The old hacks for CPU frequency changes from BFS have been removed, leaving the tunables to default as per mainline.
The default of 100Hz has been removed, but in its place a new and recommended 128Hz has been implemented. This is just a silly micro-optimisation to take advantage of the fact that dividing by 128 uses fast shifts on CPUs, unlike dividing by 100, and 128Hz is close enough to 100Hz to otherwise behave the same.
For the -ck patch only, I've reinstated updated and improved versions of the high resolution timeouts to improve the behaviour of userspace that is inappropriately Hz dependent, allowing low Hz choices without affecting latency.
Additionally by request I've added a couple of tunables to adjust the behaviour of the high res timers and timeouts.
/proc/sys/kernel/hrtimer_granularity_us
and
/proc/sys/kernel/hrtimeout_min_us
Both of these are in microseconds and can be set from 1-10,000. The first is how accurate high res timers will be in the kernel and is set to 100us by default (on mainline it is Hz accuracy).
The second is how small a generic "minimum timeout" request in kernel code can be. It is set to 1000us by default (on mainline it is one tick).
I doubt you'll find anything useful by tuning these but feel free to go nuts. Decreasing the second tunable much further risks breaking some driver behaviour.
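For anyone who does want to poke at them, they're ordinary sysctls, so something like the following works at runtime (values in microseconds, as above):
cat /proc/sys/kernel/hrtimer_granularity_us    # current high res timer accuracy (default 100)
echo 100 > /proc/sys/kernel/hrtimer_granularity_us
echo 1000 > /proc/sys/kernel/hrtimeout_min_us  # minimum generic timeout (default 1000)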
Enjoy!
お楽しみ下さい
-ck
Labels: -ck, 4.8, cpufreq, hyperthreading, interactivity, kernel, latency, linux, MuQSS, real-time, scheduler, sleep
Saturday, 29 October 2016
linux-4.8-ck5, MuQSS version 0.120
Announcing a new version of MuQSS and a -ck release to go with it, in concert with mainline releasing 4.8.5.
4.8-ck5 patchset:
http://ck.kolivas.org/patches/4.0/4.8/4.8-ck5/
MuQSS by itself for 4.8:
4.8-sched-MuQSS_120.patch
MuQSS by itself for 4.7:
4.7-sched-MuQSS_120.patch
Git tree:
https://github.com/ckolivas/linux
This is a fairly substantial update to MuQSS which includes bugfixes for the previous version, performance enhancements, new features, and completed documentation. This will likely be the first publicly announced version on LKML.
EDIT: Announce here: LKML
New features:
- MuQSS is now a tickless scheduler. That means it can maintain its guaranteed low latency even in a build configured with a low Hz tick rate. To that end, it is now defaulting to 100Hz, and it is recommended to use this as the default choice for it leads to more throughput and power savings as well.
- Improved performance for single threaded workloads with CPU frequency scaling.
- Full NoHZ now supported. This disables ticks on busy CPUs instead of just idle ones. Unlike mainline, MuQSS can do this virtually all the time, regardless of how many tasks are currently running. However this option is for very specific use cases (compute servers running specific workloads) and not for regular desktops or servers.
- Numerous other configuration options that were previously disabled from mainline are now allowed again (though not recommended for regular users.)
- Completed documentation can now be found in Documentation/scheduler/sched-MuQSS.txt
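If you want to confirm what tick settings your running build has before comparing, and assuming your distribution exposes the kernel config in one of the usual places, a quick check looks like this:
zgrep -E "CONFIG_HZ=|CONFIG_NO_HZ" /proc/config.gz           # if CONFIG_IKCONFIG_PROC is enabled
grep -E "CONFIG_HZ=|CONFIG_NO_HZ" /boot/config-$(uname -r)   # or the distro-installed config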
Bugfixes:
- Fix for the various stalls some people were still experiencing, along with the softirq pending warnings.
- Fix for some loss of CPU for heavily sched_yielding tasks.
- Fix for the BFQ warning (-ck only)
Enjoy!
お楽しみ下さい
-ck
Monday, 24 October 2016
linux-4.8-ck4, MuQSS CPU scheduler v0.116
Yet another bugfix release for MuQSS and the -ck patchset, with one of the most substantial latency fixes yet. Everyone should upgrade if they're on a previous 4.8 patchset of mine. Sorry about the frequency of these releases, but I just can't allow a known buggy release to be the latest version.
4.8-ck4 patchset:
http://ck.kolivas.org/patches/4.0/4.8/4.8-ck4/
MuQSS by itself for 4.8:
4.8-sched-MuQSS_116.patch
MuQSS by itself for 4.7:
4.7-sched-MuQSS_116.patch
I'm hoping this is the release that allows me to not push any more -ck versions out till 4.9 is released since it addresses all remaining issues that I know about.
A lingering bug that has been troubling me for some time was leading to occasional massive latencies, and thanks to some detective work by Serge Belyshev I was able to narrow it down to a single line fix which dramatically improves measured worst case latency. Throughput is virtually unchanged. The flow-on effect to other areas was also apparent, with occasionally unused CPU cycles and weird stalls on some workloads.
Sched_yield was reverted to the old BFS mechanism, which GPU drivers prefer but which wasn't working previously on MuQSS because of the first bug. The difference is substantial now, and drivers (such as the nvidia proprietary driver) and apps that call sched_yield a lot (such as the Folding@home client) behave much better.
The late introduced bugs that got into ck3/muqss115 were reverted.
The results come up quite well now with interbench (my latency-under-load benchmark), which I have recently updated and which should now give sensible values:
https://github.com/ckolivas/interbench
If you're baffled by interbench results, the most important number is %deadlines met, which should be as close to 100% as possible, followed by max latency, which should be as low as possible for each section. In the near future I'll announce an official new release version.
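For anyone who wants to try it themselves, a typical run is just a clone, build and execute; the exact options differ between versions, so check the built-in help:
git clone https://github.com/ckolivas/interbench
cd interbench
make
./interbench -h    # list the options available in this version
./interbench       # run the default latency-under-load simulations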
Pedro in the comments section previously was using runqlat from bcc tools to test latencies as well, but after some investigation it became clear to me that the tool was buggy and did not work properly with bfs/muqss either, so I've provided a slightly updated version here which should work properly:
runqlat.py
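If you want to reproduce those measurements with the fixed script, it follows the usual bcc pattern of an optional interval and count, and needs the bcc python bindings plus root (the invocation is assumed to be unchanged from the stock bcc tool):
sudo ./runqlat.py 1 10    # print ten one-second histograms of run queue (scheduling) latency in microseconds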
Enjoy!
お楽しみ下さい
-ck
Friday, 21 October 2016
linux-4.8-ck2, MuQSS version 0.114
Announcing an updated version, and the first -ck release with MuQSS as the scheduler, officially retiring BFS from further development in line with the diminished rate of bug reports with MuQSS. It is clear that the little attention BFS had received over the years, apart from rushed synchronisation with mainline, had caused a number of bugs to creep in, and MuQSS is basically a rewritten evolution of the same code, so it makes no sense to maintain both.
http://ck.kolivas.org/patches/4.0/4.8/4.8-ck2/
MuQSS version 0.114 by itself:
4.8-sched-MuQSS_114.patch
Git tree includes branches for MuQSS and -ck:
https://github.com/ckolivas/linux
In addition to the most up to date version of MuQSS replacing BFS, this is the first release with BFQ included. It is configurable and is set by default in -ck though it is entirely optional.
The MuQSS changes since 112 are as follows:
- Added cacheline alignment to atomic variables courtesy of Holger Hoffstätte
- Fixed PPC build courtesy of Serge Belyshev.
- Implemented wake lists for separate CPU packages.
- Send hotplug threads to CPUs even if they're not alive yet since they'll be enabling them.
- Build fixes for uniprocessor.
- A substantial revamp of the sub-tick process accounting, decreasing the number of variables used, simplifying the code, and increasing the resolution to nanosecond accounting. Now even tasks that run for less than 100us will not escape visible accounting.
This release should bring slightly better performance, more so on multi-cpu machines, and fairer accounting/latency.
Enjoy!
お楽しみ下さい
-ck
Tuesday, 18 October 2016
First MuQSS Throughput Benchmarks
The short version graphical summary:
Red = MuQSS 112 interactive off
Purple = MuQSS 112 interactive on
Blue = CFS
The detail:
http://ck.kolivas.org/patches/muqss/Benchmarks/20161018/
I went on a journey looking for meaningful benchmarks to conduct to assess the scalability aspect as far as I could on my own 12x machine and was really quite depressed to see what the benchmark situation on linux is like. Only the old and completely invalid benchmarks seem to still be hanging around on public sites and being promoted, like Reaim, aim7, dbench, volanomark, etc., and none of those are useful scalability benchmarks. Even more depressing was that the only ones with any reputation are actually commercial benchmarks costing hundreds of dollars.
This made me wonder out loud just how the heck mainline is even doing scalability improvements if there are precious few valid benchmarks for linux and no one's using them. The most promising ones, like mosbench, need multiple machines and quite a bit of set up to get them going.
I spent a day wading through the phoronix test suite (a site and suite not normally known for meaningful high performance computing discussion and benchmarks) looking for benchmarks that could give meaningful results for multicore scalability assessment and were not too difficult to deploy, and came up with the following collection:
John The Ripper - a CPU bound application that is threaded to the number of CPUs and intermittently drops to one thread making for slightly more interesting behaviour than just a fully CPU bound workload.
7-Zip Compression - a valid real world CPU bound application that is threaded but rarely able to spread out to all CPUs making it an interesting light load benchmark.
ebizzy - This emulates a heavy content delivery server load which scales beyond the number of CPUs, emulating what goes on between an http server and a database.
Timed Linux Kernel Compilation - A perennial favourite because it is a real world case and very easy to reproduce. Despite numerous complaints about its validity as a benchmark, it is surprisingly consistent in its results and tests many facets of scalability, though it does not scale to use all CPUs at all times either.
C-Ray - A ray tracing benchmark that uses massive threading per CPU and is completely CPU bound but overloads all CPUs.
Primesieve - A prime number generator that is threaded to the number of CPUs exactly, is fully CPU bound and is cache intensive.
PostgreSQL pgbench - A meaningful database benchmark that is done at 3 different levels - single threaded, normal loaded and heavily contended, each testing different aspects of scalability.
And here is a set of results comparing 4.8.2 mainline (labelled CFS), MuQSS 112 in interactive mode (MuQSS-int1) and MuQSS 112 in non-interactive mode (MuQSS-int0):
http://ck.kolivas.org/patches/muqss/Benchmarks/20161018/
It's worth noting that there is quite a bit of variance in these benchmarks and some are bordering on the difference being just noise. However there is a clear pattern here: when the load is light, CFS outperforms MuQSS in terms of throughput; when load is heavy, and the heavier it gets, MuQSS outperforms CFS, especially in non-interactive mode. As a friend noted, for the workloads where you wouldn't be running MuQSS in interactive mode, such as a web server or database, non-interactive mode is of clear performance benefit. So at least on the hardware I had available to me, a 12x machine, MuQSS is scaling better than mainline on these workloads as load increases.
The obvious question people will ask is why MuQSS doesn't perform better at light loads, and in fact I have an explanation. The reason is that mainline tends to cling to processes much more, so that when it is hovering at low numbers of active processes, they'll all cluster on one or a few CPUs rather than being spread out everywhere. This means the CPU benefits more from the turbo modes virtually all newer CPUs have, but it comes at a cost: the latency to tasks is greater because they're competing for CPU time on fewer busy CPUs rather than spreading out to idle cores or threads. It is a design decision in MuQSS, taken from BFS, to always spread out to any idle CPUs if they're available, to minimise latency, and that's one of the reasons for the interactivity and responsiveness of MuQSS. Of course I am still investigating ways of closing that gap further.
Hopefully I can get some more benchmarks from someone with even bigger hardware, and preferably with more than one physical package since that's when things really start getting interesting. All in all I'm very pleased with the performance of MuQSS in terms of scalability on these results, especially assuming I'm able to maintain the interactivity of BFS which were my dual goals.
There is MUCH more to benchmarking than pure CPU throughput - which is almost the only thing these benchmarks are checking - but that's what I'm interested in here. I hope that providing my list of easy to use benchmarks and the reasoning behind them can generate interest in some kind of meaningful standard set of benchmarks. I did start out in kernel development originally after writing benchmarks and being a benchmarker :P
To aid that, I'll give simple instructions here for how to roughly imitate the benchmarks and get results like those I've produced above.
Download the phoronix test suite from here:
http://www.phoronix-test-suite.com/
The generic tar.gz is perfectly fine. Then extract it and install the relevant benchmarks like so:
tar xf phoronix-test-suite-6.6.1.tar.gz
cd phoronix-test-suite
./phoronix-test-suite install build-linux-kernel c-ray compress-7zip ebizzy john-the-ripper pgbench primesieve
./phoronix-test-suite default-run build-linux-kernel c-ray compress-7zip ebizzy john-the-ripper pgbench primesieve
Now obviously this is not ideal, since you shouldn't run benchmarks on a multiuser login with Xorg and all sorts of other crap running, so I actually always run benchmarks at init level 1.
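If you want to do the same, dropping to runlevel 1 is just the usual single-user switch, done in whichever way matches your init system:
telinit 1                          # classic sysvinit way to drop to single user mode
systemctl isolate rescue.target    # roughly the equivalent on systemd systems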
Enjoy!
お楽しみ下さい
-ck
Labels: benchmark, bfs, interactivity, kernel, latency, linux, MuQSS, scalability, scheduler
Tuesday, 11 October 2016
MuQSS - The Multiple Queue Skiplist Scheduler v0.111
Lots of bugfixes, lots of improvements, build fixes, you name it.
For 4.8:
4.8-sched-MuQSS_111.patch
For 4.7:
4.7-sched-MuQSS_111.patch
And in a complete departure from BFS, a git tree (which suits constant development like this, unlike BFS's massive stable-release ports):
https://github.com/ckolivas/linux
Look in the pending/ directory to see all the patches that went into this or read the git changelog. In particular numerous warnings were fixed, throughput improved compared to 108, SCHED_ISO was rewritten for multiple queues, potential races/crashes were addressed, and build fixes for different configurations were committed.
I haven't been able to track down the bizarre latency issues reported by runqlat, and when I try to reproduce them myself I get nonsense latency values greater than the history of the earth, so I suspect an interface bug with BPF reporting values. It doesn't seem to affect actual latency in any way.
EDIT: Updated to version 0.111 which has a fix for suspend/resume.
Enjoy!
お楽しみ下さい
-ck
Friday, 7 October 2016
MuQSS - The Multiple Queue Skiplist Scheduler v0.108
A new version of the MuQSS CPU scheduler
Incrementals and full patches available for 4.8 and 4.7 respectively here:
http://ck.kolivas.org/patches/muqss/4.0/4.8/
http://ck.kolivas.org/patches/muqss/4.0/4.7/
Yet more minor bugfixes and some important performance enhancements.
This version brings to the table the same locking scheme for trying to wake tasks up as mainline, which is advantageous on process-busy workloads with many CPUs. This is important because the main reason for moving to multiple runqueues was to minimise contention on the global runqueue lock in BFS (as mentioned here numerous times before), and this wakeup scheme helps make the most of the multiple discrete runqueue locks.
Note this change is much more significant than the last releases so new instability is a possibility. Please report any problems or stacktraces!
There was a workload when I started out that I used lockstat to debug, to get an idea of how much lock contention was going on and how long it lasted. Originally, with the first incarnations of MuQSS, a 14 second benchmark with thousands of tasks on a 12x CPU machine obtained 3 million locks and had almost 300k contentions, with the longest contention lasting 80us. Now the same workload grabs locks just 5k times with only 18 contentions in total, and the longest lasted 1us.
This clearly demonstrates that the target endpoint for avoiding lock contention has been achieved. It does not translate into performance improvements on ordinary hardware today because you need ridiculous workloads on many CPUs to even begin deriving advantage from it. However, as even our phones have now reached 8 logical CPUs, it will only be a matter of time before 16 threads appear on commodity hardware - a complaint that was directed at BFS when it came out 7 years ago, though they still haven't appeared just yet. BFS was shown to be scalable for all workloads up to 16 CPUs, and beyond for certain workloads, but suffered dramatically for others. MuQSS now makes it possible for what was BFS to remain useful much further into the future.
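For anyone wanting to repeat that kind of measurement, lockstat is the in-kernel lock statistics facility (CONFIG_LOCK_STAT); on a kernel built with it, a run looks roughly like this:
echo 0 > /proc/lock_stat             # clear any existing statistics
echo 1 > /proc/sys/kernel/lock_stat  # start collecting
# ... run the workload being measured ...
echo 0 > /proc/sys/kernel/lock_stat  # stop collecting
less /proc/lock_stat                 # per-lock contention counts and wait/hold times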
Again - MuQSS is aimed primarily at desktop/laptop/mobile device users for the best possible interactivity and responsiveness, and is still very simple in its approach to balancing workloads to CPUs so there are likely to be throughput workloads on mainline that outperform it, though there are almost certainly workloads where the opposite is true.
I've now addressed all planned changes to MuQSS and plan to hopefully only look at bug reports instead of further development from here on for a little while. In my eyes it is now stable enough to replace BFS in the next -ck release barring some unexpected showstopper bug appearing.
EDIT: If you blinked you missed the 107 announcement which was shortly superseded by 108.
EDIT2: Always watch the pending directory for updated pending patches to add.
http://ck.kolivas.org/patches/muqss/4.0/4.8/Pending/
Enjoy!
お楽しみ下さい
-ck
Labels: 4.8, bfs, interactivity, kernel, latency, linux, MuQSS, scalability, scheduler
Monday, 9 January 2012
Towards Transparent CPU Scheduling
Of BFS related interest is an excellent thesis by Joseph T. Meehean entitled "Towards Transparent CPU Scheduling". Of particular note is the virtually deterministic nature of BFS, especially in fairness and latency. While this of course interests me greatly because of extensive testing of the BFS CPU scheduler, there are many aspects of both the current CFS CPU scheduler and the older O(1) CPU scheduler that are discussed that anyone working on issues to do with predictability, scalability, fairness and latency should read.
http://research.cs.wisc.edu/wind/Publications/meehean-thesis11.html
Abstract:
In this thesis we propose using the scientific method to develop a deeper understanding of CPU schedulers; we use this approach to explain and understand the sometimes erratic behavior of CPU schedulers. This approach begins with introducing controlled workloads into commodity operating systems and observing the CPU scheduler's behavior. From these observations we are able to infer the underlying CPU scheduling policy and create models that predict scheduling behavior.
We have made two advances in the area of applying scientific analysis to CPU schedulers. The first, CPU Futures, is a combination of predictive scheduling models embedded into the CPU scheduler and user-space controller that steers applications using feedback from these models. We have developed these predictive models for two different Linux schedulers (CFS and O(1)), based on two different scheduling paradigms (timesharing and proportional-share). Using three different case studies, we demonstrate that applications can use our predictive models to reduce interference from low-importance applications by over 70%, reduce web server starvation by an order of magnitude, and enforce scheduling policies that contradict the CPU scheduler's.
Harmony, our second contribution, is a framework and set of experiments for extracting multiprocessor scheduling policy from commodity operating systems. We used this tool to extract and analyze the policies of three Linux schedulers: O(1), CFS, and BFS. These schedulers often implement strikingly different policies. At the high level, the O(1) scheduler carefully selects processes for migration and strongly values processor affinity. In contrast, CFS continuously searches for a better balance and, as a result, selects processes for migration at random. BFS strongly values fairness and often disregards processor affinity.
entitled "Towards Transparent CPU Scheduling". Of particular note is
the virtually deterministic nature of BFS, especially in fairness and
latency. While this of course interests me greatly because of
extensive testing of the BFS CPU scheduler, there are many aspects of
both the current CFS CPU scheduler and the older O(1) CPU scheduler
that are discussed that anyone working on issues to do with
predictability, scalability, fairness and latency should read.
http://research.cs.wisc.edu/
Abstract:
In this thesis we propose using the scientific method to develop a deeper understanding of CPU schedulers; we use this approach to explain and understand the sometimes erratic behavior of CPU schedulers. This approach begins with introducing controlled workloads into commodity operating systems and observing the CPU scheduler's behavior. From these observations we are able to infer the underlying CPU scheduling policy and create models that predict scheduling behavior.We have made two advances in the area of applying scientific analysis to CPU schedulers. The first, CPU Futures, is a combination of predictive scheduling models embedded into the CPU scheduler and user-space controller that steers applications using feedback from these models. We have developed these predictive models for two different Linux schedulers (CFS and O(1)), based on two different scheduling paradigms (timesharing and proportional-share). Using three different case studies, we demonstrate that applications can use our predictive models to reduce interference from low-importance applications by over 70%, reduce web server starvation by an order of magnitude, and enforce scheduling policies that contradict the CPU scheduler's.
Harmony, our second contribution, is a framework and set of experiments for extracting multiprocessor scheduling policy from commodity operating systems. We used this tool to extract and analyze the policies of three Linux schedulers: O(1), CFS, and BFS. These schedulers often implement strikingly different policies. At the high level, the O(1) scheduler carefully selects processes for migration and strongly values processor affinity. In contrast, CFS continuously searches for a better balance and, as a result, selects processes for migration at random. BFS strongly values fairness and often disregards processor affinity.
Labels: bfs, deterministic, fairness, latency
Wednesday, 29 September 2010
Biasing for latency under load
One of the mantras of BFS is that it has very little in the way of tunables and should require no input on the part of the user to get both good latency and good throughput by default. The only tunable available is the rr_interval found in /proc/sys/kernel/ and turning it up will improve throughput at the expense of latency, while turning it down will do the opposite (iso_cpu is also there, but that's more a permission tunable than a scheduler behavioural tunable). Giving you the bottom line for those who want to tune, I suggest running 100Hz with an rr_interval of 300 if you are doing nothing but cpu intensive slow tasks (video encoding, folding etc) and running 1000Hz with an rr_interval of 2 if you care only for latency at all costs.
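For reference, the tunables mentioned above live in /proc/sys/kernel/ on a BFS kernel and can be flipped at runtime, so the bottom line above translates to something like:
cat /proc/sys/kernel/rr_interval         # show the current rr interval
echo 300 > /proc/sys/kernel/rr_interval  # bias for throughput (cpu intensive batch work)
echo 2 > /proc/sys/kernel/rr_interval    # bias for latency at all costs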
I've believed for a long time that it makes no sense to tune for ridiculously high loads on your machine if your primary use is a desktop, and if you get into an overload situation, you should expect a slowdown. Trying to tune for such conditions always ends up costing you in other ways that just isn't worth it since you spend 99.9999% of your time at "normal loads". What BFS does at high loads is a progressive lengthening of latency proportional to the load while maintaining relatively regular throughput. So if you run say a make -j4 on a quad core machine, you shouldn't really notice anything going on, but if you run a make -j64 you should notice a dramatic slowdown, and loss of fluid movement of your cursor and possibly have audio skip. What exactly is the point of doing a make -j64 on a quad core desktop? There is none apart from as some kind of mindless test. However on any busy server, it spends most of its time on loads much greater than 1 per CPU. In that setting maintaining reasonable latency, while ensuring maximal throughput is optimal.
The mainline kernel seems intent on continually readdressing latency under load on a desktop as though that's some holy grail. Lately the make -j10 load on a uniprocessor workload has been used as the benchmark. What they're finding, not surprisingly, is that the lower you aim your latencies, the smoother the desktop will continue to feel at the higher loads, and they're trying to find some "optimum" value where latency will still be good without sacrificing throughput too much. Why 10? Why not 100? How about 1000? Why choose some arbitrary upper figure to tune to? Why not just accept that overload is overload, that latency is going to suffer, and not damage throughput trying to contain it?
For my own response to this, here's a patch:
(edit, patch for bfs357)
bfs357-latency_bias.patch
The changelog, also in the patch itself, follows:
Make it possible to maintain low latency as much as possible at high loads by shortening timeslice the more loaded the machine will get. Do so by adding a tunable latency_bias which is disabled by default. Valid values are from 0 to 100, where higher values mean bias more for latency as load increases. Note that this should still maintain fairness, but will sacrifice throughput, potentially dramatically, to try and keep latencies as low as possible. Hz will still be a limiting factor, so the higher Hz is, the lower the latencies maintainable.
The effect of enabling this tunable will be to ensure that very low CPU usage processes, such as mouse cursor movement, will remain fluid no matter how high the load is. It's possible to have a smooth mouse cursor with massive loads, but the effect on throughput can be up to a 20% loss at ultra high loads. At meaningful loads, a value of one will have minimal impact on throughput and ensure that under the occasional overload condition the machine will still feel fluid.
This is to achieve the converse of what normally happens. You can choose to tune to maintain either latency at high loads (set to 100), or throughput (set to 0 for current behaviour), or some value in between (set to 1 or more). So I'm putting this out there for people to test and report back to see if they think it's worthwhile.
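Assuming the new tunable appears in the usual place alongside rr_interval once the patch is applied (path assumed here), trying it out looks like:
echo 0 > /proc/sys/kernel/latency_bias    # current/default behaviour, favour throughput
echo 1 > /proc/sys/kernel/latency_bias    # mild latency bias with minimal throughput impact
echo 100 > /proc/sys/kernel/latency_bias  # keep latency as low as possible regardless of load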