Thank you for contributing to the discussion. I agree that the
page I cited is lacking (as I mentioned), but unfortunately it's
the best one I could find that touches on the subject.
Naturally a benchmark would be ideal, but again, unfortunately, I
don't have the resources to run one myself right now. From a purely
technical point of view, I think the current choice -- a kernel
tuned for desktop use -- is the one that should require a benchmark
to be adopted, but I get the message.

On 2020-11-22 5:49 p.m., Noah Meyerhans wrote:
> On Sun, Nov 22, 2020 at 03:53:32PM -0800, Flavio Veloso Soares wrote:
>>> Unfortunately, I couldn't find many comprehensive benchmarks of
>>> kernel CONFIG_PREEMPT* options. The one at
>>> [1]https://www.codeblueprint.co.uk/2019/12/23/linux-preemption-latency-throughput.html
>>> seems to be very thorough, [...]
>>
>> Not particularly. I'm used to latency benchmarks showing e.g.
>> average, 90th percentile, 99th percentile, as well as worst.
>
> I don't think Ben was talking about specific benchmarks. The web
> page you cite lacks basic measurements one would expect to see
> from *any* meaningful performance benchmark. Comparing maximum
> latency is fine, but it's not really relevant by itself. If a
> configuration change improves the worst case (100th percentile)
> but negatively impacts the 50th percentile, is that a change worth
> making? Maybe. But without having that data at all, the benchmark
> really isn't worth much.
>
> It's totally reasonable for us to consider making this change, but
> we should have comprehensive data about the impact of doing so.
> What impact does the change have on different classes of
> workloads? e.g. high tps, CPU-bound, IO-bound, etc. It's entirely
> possible that the proposed change improves performance under
> certain workloads, but negatively impacts others. Without knowing
> the impact in more detail, which would allow us to evaluate the
> tradeoffs, I don't think there's a compelling reason to make a
> change.
>
> noah

--
FVS
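P.S. For anyone following along: the kind of percentile summary Ben
and Noah describe is easy to produce from raw latency samples. Here's
a minimal stdlib-only Python sketch (the sample numbers below are
hypothetical, not from any real benchmark run):

```python
import statistics


def latency_summary(samples_us):
    """Summarize latency samples (microseconds) the way a meaningful
    benchmark should: average, tail percentiles, and worst case."""
    ordered = sorted(samples_us)
    # quantiles() with n=100 yields the 1st..99th percentile cut points.
    pct = statistics.quantiles(ordered, n=100)
    return {
        "avg": statistics.fmean(ordered),
        "p90": pct[89],   # 90th percentile
        "p99": pct[98],   # 99th percentile
        "max": ordered[-1],  # worst case (100th percentile)
    }


if __name__ == "__main__":
    # Hypothetical sample set: mostly ~50 us with a few large outliers.
    # A max-only comparison would hide that typical latency is fine.
    samples = [50.0] * 97 + [400.0, 800.0, 1500.0]
    print(latency_summary(samples))
```

The point of reporting all four numbers is exactly Noah's: a config
change can shrink "max" while worsening the median, and only the full
distribution lets you judge that tradeoff.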