
Re: Possible abuse of dpkg-deb -z9 for xz compressed binary packages



On 09/25/2014 02:18 AM, Henrique de Moraes Holschuh wrote:
> On Thu, 25 Sep 2014, Thomas Goirand wrote:
>> On 09/02/2014 09:39 PM, Henrique de Moraes Holschuh wrote:
>>> For -z9, it is as bad as ~670MiB to
>>> compress, and ~65MiB to decompress.
>>
>> I'd say this really depends on what you do. For what I do (eg: OpenStack
>> packages), I don't see how 65MB could be a problem. I do compress with
>> -z9, and have no intention to change this, because it makes sense for
>> these packages, where the bottleneck for large deployments will more be
>> the network transfers than uncompressing on each individual nodes.
> 
> OTOH, using -z9 on datasets smaller than the -z8 dictionary size *is* a
> waste of memory

Exactly why should I care, when in all likelihood my users will have
plenty of RAM?

These days, in a cloud deployment, a server with 64 GB of RAM is small,
and can be considered previous generation (that amount of RAM costs
about 300 EUR before tax). 256 GB is quite common. And we're talking
about a few dozen MB for decompression. That's a difference of at least
three orders of magnitude. So I don't care what you call "a lot" of RAM:
for my application, it's not a lot, it's negligible.
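
For what it's worth, here's a rough back-of-the-envelope sketch in
Python, using the ~65 MiB decompression figure quoted above and an
assumed 64 GB node (both numbers are taken from this thread, not from
measurements):

    # Compare xz -z9 decompression memory against the RAM of a "small"
    # cloud node. Figures are the ones quoted in this thread.
    XZ9_DECOMPRESS_MIB = 65        # ~65 MiB to decompress (quoted above)
    NODE_RAM_GIB = 64              # assumed "small" cloud node

    node_ram_mib = NODE_RAM_GIB * 1024
    fraction = XZ9_DECOMPRESS_MIB / node_ram_mib
    print(f"Decompression uses {fraction:.4%} of RAM "
          f"(roughly 1/{node_ram_mib // XZ9_DECOMPRESS_MIB} of the node).")
    # -> Decompression uses 0.0992% of RAM (roughly 1/1008 of the node).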

Thomas
