
Re: Can Debian do multi-core "MY WAY"?



On Sun, 15 May 2016, Richard Owlett wrote:
> On 5/14/2016 3:50 PM, tomas@tuxteam.de wrote:
> >On Sat, May 14, 2016 at 12:31:28PM -0500, Richard Owlett wrote:
> >>I date from an era when "memory banks" were switched via the contents
> >>of an I/O port ;/

Reminds me of the MSX memory mapper, and MSX paged slot selection...

> >>I envision
> >>   core A using memory range X
> >>   core B using memory range Z
> >
> >What you are describing is called "non-uniform memory access", aka
> >NUMA [1], these days, and yes, the Linux kernel takes into account
> >that different parts of memory have different "distances" to each
> >processor (e.g. by assigning process "affinities" to each CPU).
> >
> >To a lesser extent, CPU caches do this too.
> >
> >This isn't surprising, since CPU bandwidth has outrun memory
> >bandwidth significantly over the last 20-30 years. If a CPU
> >had to wait for every byte to arrive from main memory, it would
> >be slower by a huge amount [2].
> >
> >So in some way the answer is: yes, your PC and your OS are probably
> >doing it already :-)
> >
> >regards
> >
> >[1] https://en.wikipedia.org/wiki/Non-Uniform_Memory_Access
> >[2] http://gameprogrammingpatterns.com/data-locality.html
> >
> 
> [1] Tells me I almost asked the question I thought I intended to ask ;/
> I'm a consumer of programs but not a programmer.
> A closer approximation to a correct _question_ might be "Can I, as a user,
> tell the OS to run program *1* and _only_ program *1* on core A, which has
> exclusive use of memory range X? Everything else can use core B and memory
> range Y."
> [I'll probably have a better description after hitting send ;]

Yes, you can.  We call that "cpu pinning".  But the way it works for what
you want is that you need to pin everything else *out* of CPU A (including
interrupts), and pin the process (or interrupt, etc) you want to CPU A.  And
that's somewhat annoying to do.
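As a minimal sketch of the easy half of that (the `taskset` tool from
util-linux; the `isolcpus=` boot parameter mentioned in the comments is the
usual blunt instrument for keeping everything *else* off a CPU, and the CPU
numbers here are just examples):

```shell
# Keeping everything *off* a CPU is easiest done at boot, e.g. by adding
# "isolcpus=1" to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
# (this would isolate logical CPU 1; interrupts are steered separately
# via /proc/irq/<n>/smp_affinity_list).
#
# Pinning a process *onto* a CPU is then a one-liner.  Run a command
# restricted to logical CPU 0:
taskset -c 0 echo "this ran only on CPU 0"

# An already-running process (hypothetical PID 1234) can be moved the
# same way:
#   taskset -cp 0 1234
```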

In Debian (*nix, really), one usually doesn't have to bother with main
memory affinity unless one is running HPC workloads on a NUMA box.  SMP and
SMP+SMT boxes (i.e. non-NUMA) only care about cache-locality effects.  And
by the time you're bothering with cache affinity beyond what the kernel
already does automatically when you pin stuff to CPUs, you need to know
enough low-level details about the platform, firmware and kernel that it
doesn't make sense to even try to explain it on this mailing list.

Package "hwloc" helps with auto-detecting cache/node/processor topology and
locality, and has utilities to pin stuff to nodes, etc.
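A quick sketch of those utilities (assuming the "hwloc" package is
installed, e.g. via "apt install hwloc"; the commands are skipped here if
it is not):

```shell
# If hwloc is available, show the topology and pin a command to a core
# object; otherwise note that it is missing.
if command -v hwloc-ls >/dev/null 2>&1; then
    # Draw the machine's cache/core/NUMA topology (text mode):
    hwloc-ls
    # Pin a command to the first core object, wherever the kernel
    # happens to number it:
    hwloc-bind core:0 -- echo "bound to the first core"
else
    echo "hwloc not installed"
fi
```

The advantage over raw CPU numbers is that hwloc addresses topology objects
(cores, caches, NUMA nodes), so the same invocation works across machines
with different CPU numbering.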

HPC (high-performance computing) tutorials and guides often explain a lot
of the underlying concepts; you might want to look for them.
High-throughput IO (such as gigabit/10-gigabit software-based routing on
many-core machines) is another application that requires one to bother with
these details (in this specific case: kernel interrupt pinning, hardware
capabilities for interrupt routing/MSI-X, PCIe bus topology, etc).  Look
for papers on these subjects; they're fun to read and explain a lot of
low-level details.

Most often, it is enough to pin the process to CPU A, raise its scheduling
priority, and let the other processes move freely among CPUs (the default).
If the process on CPU A is using the CPU all the time, the task scheduler
will keep everything else (other than some interrupts) away from CPU A.
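A sketch of that simpler approach, using the util-linux `taskset` and
`chrt` tools ("mycommand" is a placeholder; actually setting a real-time
priority needs root, so that line is shown commented out):

```shell
# Show which scheduling policies and priority ranges this kernel offers
# (no privileges needed):
chrt -m

# Pin a command to logical CPU 0 and let everything else float freely:
taskset -c 0 echo "running pinned to CPU 0"

# With root, the same command could also be given a high real-time
# (SCHED_FIFO) priority, e.g.:
#   sudo chrt -f 80 taskset -c 0 mycommand
```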

But really, unless your problem is actually *jitter*, tight hard-realtime
semantics, or high-throughput HPC workloads, you are unlikely to need to
mess with any of this.

And if you are going to do tight hard-realtime work, or you are worried
about sub-millisecond jitter, Debian is not likely to be the right tool for
the job.  We do high-throughput IO and HPC workloads just fine, though :-)

> [2] gives valuable insight in how I should partition my disk when moving
> from Windows to Debian. It gives me some insight anyway.

Don't try to get everything right the first time.  Plan for a few
reinstalls as you learn, and it will be much easier.

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh

