
Re: Reasonable values for the -Xmx parameter ?



On Mon, Mar 8, 2010 at 3:34 PM, Vincent Fourmond <fourmond@gmail.com> wrote:
> Sylvestre Ledru wrote:
>> Le lundi 08 mars 2010 à 20:11 +0100, Vincent Fourmond a écrit :
>>>   I'm wondering what a decent value for the -Xmx parameter really is.
>>> I used to think that the lowest value that seems to let the program
>>> run on every arch is good, but I no longer think this is a good idea:
>>>
>>>   * the program might be clever enough to not allocate more memory than
>>> it can, while still being able to use significantly more to speed things
>>> up (for instance through the use of caches)
>>>   * the tighter the memory, the more often the GC has to run, which
>>> could lead to performance penalties.
>>>
>>>   My question then is: is there a problem setting it to a value big
>>> enough (say, -Xmx1G) for a standard app ? (I'm thinking about freecol,
>>> that takes up quite a lot of memory) After all, most of the other
>>> programming languages don't limit memory by default, and the use of the
>>> ulimit shell builtin permits some fine tuning for this parameter (and
>>> others).
>>>
>>>   What do you think ?
>> I have had bad experiences with setting more memory than is available.
>> It led to unexpected crashes and I had to go back to 256m.
>> It is a real pity that it is still mandatory to specify it...

I never had that problem. The only problem I had was some old code
that was leaking file descriptors, and that went undiagnosed because
with a smaller heap the garbage-collected File instances had their OS
resources freed up correctly. Now, that was a bug and needed to be
fixed :-)  But aside from that, more heap has always made for a
happier JVM, though my workloads are batch processing of large corpora
and the like.

>  All right...
>
>  Then, maybe I could add a function to java-wrappers that would find
> out what a 'good default' for that parameter is, picking something more
> than the memory required but still reasonably less than the memory
> present on the machine ?

If you want to go in that direction, you might also want to enable
pointer compression on AMD64 (-XX:+UseCompressedOops), which makes a
huge difference to minimum heap sizes on AMD64. Otherwise, what is OK
for 32 bits might be way too little for 64 bits.

http://wikis.sun.com/display/HotSpotInternals/CompressedOops
http://java.sun.com/javase/6/webnotes/6u14.html
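A wrapper could add that flag conditionally. This is only a sketch:
the -XX:+UseCompressedOops flag is real (HotSpot 6u14 and later, per
the release notes above), but the detection via `uname -m` and the
function name are assumptions, not existing java-wrappers code:

```shell
#!/bin/sh
# Hypothetical sketch: emit -XX:+UseCompressedOops only on 64-bit
# x86, where uncompressed 64-bit pointers inflate the heap.
jvm_arch_flags() {
    case "$(uname -m)" in
        x86_64|amd64)
            echo "-XX:+UseCompressedOops"
            ;;
        *)
            # 32-bit pointers are already small; nothing to add
            ;;
    esac
}

jvm_arch_flags
```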

>  Would that be useful for anyone else than me ?

I find it difficult to imagine how you would automatically determine
the heap size based on the ideas you sketched in your earlier
comments. If what you propose actually solves the problem, you might
as well write a patch for upstream to do it within the JVM :-)

In general, I would vote for having some functionality that lets
maintainers specify something along the lines of "this is the bare
minimum heap" and "this is the recommended heap", and lets
java-wrappers use one or the other depending on physical RAM and/or
some global configuration parameter in /etc (which would also be a
fine place to add the compressed-pointers flag). That, of course, is
just a wish for an ideal world ;-)
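The minimum-vs-recommended idea could look something like this. All
the names here (MIN_HEAP_MB, RECOMMENDED_HEAP_MB, the /etc path) and
the "RAM must be at least twice the recommended heap" rule are
invented for illustration; nothing like this exists in java-wrappers:

```shell
#!/bin/sh
# Hypothetical sketch: the package declares two heap sizes; the
# wrapper picks the recommended one only when physical RAM
# comfortably exceeds it, and a site-wide /etc file may override.
MIN_HEAP_MB=256
RECOMMENDED_HEAP_MB=1024

pick_heap() {
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
    mem_mb=$((mem_kb / 1024))
    # Invented config path: a global override, if present, wins
    [ -r /etc/java-wrappers/heap.conf ] && . /etc/java-wrappers/heap.conf
    # Use the recommended heap only if RAM is at least twice as large
    if [ "$mem_mb" -ge $((RECOMMENDED_HEAP_MB * 2)) ]; then
        echo "-Xmx${RECOMMENDED_HEAP_MB}m"
    else
        echo "-Xmx${MIN_HEAP_MB}m"
    fi
}

pick_heap
```

A package like freecol would then ship its two declared sizes, and a
small machine would still get a working (if slower) configuration.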

P.

