
Kernel optimizations for small machines



Hello,

Small story for you:

Today I went to a nice Free Software event near here (www.codejam.org)
featuring talks by some cool celebrities.

Among others, Andrea Arcangeli talked about some really cool patches he
and others created to let the VMM scale up to insane amounts of memory.
He was talking about machines with 32 GB of RAM, with the prospect of
needing to scale up to a terabyte or so on 64-bit machines.

So, one of the questions I asked in the Q&A (besides suggesting to just
use a filesystem to allocate that memory, since it's bigger than my
HARD DRIVE anyway) was whether anyone is working on patches to optimize
machines with around 32 MB of RAM instead of 32 GB.

Andrea said that, for one thing, it's now possible to leave out some of
these science-fiction-scale optimizations when they hurt performance
too much on small machines, and that it's also possible to optimize for
swapless operation on really tiny embedded machines.

However, he has had ideas on how to make things more efficient for
computers which need more memory than they have (for example, a Gnome
or KDE desktop on a machine with less than 256 MB of RAM).  One of his
ideas was that, when an allocation arrives with all physical memory
full, instead of copying pages out to swap (which has a bandwidth of
maybe 2-3 MB/s), the kernel could compress them and keep them in RAM
(whose bandwidth is orders of magnitude faster); the kind of data
usually found in memory also compresses very well.
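To make the idea concrete, here is a tiny user-space sketch (my own
illustration, nothing like real kernel code) of what "swapping" a page
to compressed RAM could look like, assuming zlib is available:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096UL

int main(void)
{
    unsigned char page[PAGE_SIZE], restored[PAGE_SIZE];
    unsigned long clen = compressBound(PAGE_SIZE);
    unsigned long dlen = PAGE_SIZE;
    unsigned char *cpage = malloc(clen);

    if (!cpage)
        return 1;

    /* Fake "evicted" page: mostly zeroes plus a little text,
       roughly like the data usually found in memory. */
    memset(page, 0, PAGE_SIZE);
    strcpy((char *)page, "some process data");

    /* "Swap out": compress the page and keep it in RAM
       instead of writing it to the swap device. */
    if (compress2(cpage, &clen, page, PAGE_SIZE, Z_BEST_SPEED) != Z_OK)
        return 1;

    printf("page: %lu bytes -> compressed: %lu bytes\n",
           PAGE_SIZE, clen);

    /* "Swap in": decompress when the page is touched again. */
    if (uncompress(restored, &dlen, cpage, clen) != Z_OK)
        return 1;

    free(cpage);
    return memcmp(page, restored, PAGE_SIZE) != 0;
}

(Compile with -lz.)  A mostly-zero page like this one shrinks to a few
dozen bytes, which is why keeping compressed pages in RAM could beat a
2-3 MB/s swap device by a wide margin.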

This would be supercool!  However, he said, there is little or no
commercial interest in anything like this, so he warned that we should
not expect to see it anytime soon unless we code it ourselves.

-- -- --

I thought this was worth sharing: apparently, there are ideas and
possibilities for a more efficient use of smaller hardware, but they
are not being investigated, because the people with the needed
knowledge are usually employed by companies that can always cope with
full memory by adding more RAM.

Solutions, however, exist (and compressing unused memory could be a
very good one); they are just not being investigated and implemented.

It would be interesting if more kernel hackers were made aware of the
interest in this, and even better if someone funded a kernel hacker to
spend some time working on it.


Ciao,

Enrico

--
GPG key: 1024D/797EBFAB 2000-12-05 Enrico Zini <enrico@debian.org>


