
Bug#645052: kernel only recognizes 32G of memory



On Thu, 2011-10-13 at 07:05 +0100, Ian Campbell wrote:
> On Thu, 2011-10-13 at 03:27 +0100, Ben Hutchings wrote:
> > On Wed, 2011-10-12 at 14:58 +0100, Ian Campbell wrote:
> > > On Wed, 2011-10-12 at 14:11 +0100, Ben Hutchings wrote:
> > > > On Wed, 2011-10-12 at 08:26 +0100, Ian Campbell wrote:
> > > > > On Wed, 2011-10-12 at 08:46 +0300, Dmitry Musatov wrote:
> > > > > >  The config option XEN_MAX_DOMAIN_MEMORY controls how much memory a
> > > > > > Xen instance sees. The default for 64-bit is 32GB, which is the
> > > > > > reason that m2.4xlarge Amazon EC2 instances only report this amount of
> > > > > > memory.
> > > > > >  Please set this limit to 70GB as there is a known restriction for
> > > > > > t1.micro instances at about 80GB.
> > > > > >  A similar bug exists in Ubuntu, where it is already fixed
> > > > > > (https://bugs.launchpad.net/ubuntu/+source/linux-ec2/+bug/667796)
> > > > > 
> > > > > Is this the sort of change we can consider making in a stable update?
> > > > > I'm not at all sure, although my gut feeling is that it would be safe.
> > > > [...]
> > > > 
> > > > I think so.  But what is the trade-off?  There must be some reason why
> > > > this isn't set to however many TB the kernel can support.
> > > 
> > > It affects the amount of space set aside for the P2M table (the mapping
> > > of physical to machine addresses). In the kernel in Squeeze this space
> > > is statically reserved in BSS, so increasing it will waste some more
> > > memory; according to the Kconfig comment it is 1 page per GB.
> > > 
> > > In a more up-to-date kernel the space comes from BRK and is reclaimed
> > > if it is not used; XEN_MAX_DOMAIN_MEMORY was bumped to default to 128G
> > > in the same change.
> > 
> > How intrusive is the change?  Could we reasonably backport it?
> 
> It was 58e05027b530 "xen: convert p2m to a 3 level tree" which I think
> is too big. IIRC there was a bunch of subsequent fixups to it as well,
> it was quite a subtle change.

You didn't directly answer the questions, but that sounds like 'fairly'
and 'no'.

If I understand correctly, the memory cost of expanding the table to
cover 70GB is (70GB - 32GB) * 4KB / 1GB = 152KB.  Is that right?
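To sanity-check that arithmetic, here is a throwaway sketch (the
function name is mine, not the kernel's, and it assumes the
1-page-per-GB figure from the Kconfig comment quoted above):

```python
PAGE_KB = 4  # x86 page size in KiB; Kconfig comment says 1 page per GB


def p2m_static_cost_kb(max_domain_gb):
    """KiB statically reserved for the P2M table at a given
    XEN_MAX_DOMAIN_MEMORY setting, assuming 1 page per GB."""
    return max_domain_gb * PAGE_KB


# Raising the limit from 32 GB to 70 GB:
extra = p2m_static_cost_kb(70) - p2m_static_cost_kb(32)
print(extra)  # (70 - 32) * 4 KiB = 152 KiB
```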

Since we don't have a specific flavour to support EC2, and since some
people like to run domains with much less memory, I'm inclined to say
that this is 'wontfix' for squeeze.  But I'm not sure just how small
such domains are likely to be (while still running Debian).  Maybe the
cost isn't that significant.
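For scale, the same 1-page-per-GB assumption gives a rough estimate of
the reservation as a fraction of a small domain's RAM (illustrative
numbers only, not measured against a real kernel):

```python
PAGE_KB = 4  # assumed KiB reserved per GB of XEN_MAX_DOMAIN_MEMORY


def p2m_overhead_percent(max_domain_gb, domain_mb):
    """Static P2M reservation as a percentage of a domain's memory."""
    reserved_kb = max_domain_gb * PAGE_KB
    return 100.0 * reserved_kb / (domain_mb * 1024)


# For a 128 MB domU: the 32 GB default costs about 0.1% of its RAM,
# and a 70 GB limit about 0.2% -- small even for fairly tight domains.
for limit_gb in (32, 70):
    print(limit_gb, round(p2m_overhead_percent(limit_gb, 128), 3))
```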

Ben.

-- 
Ben Hutchings
No political challenge can be met by shopping. - George Monbiot


