Re: GIT for pdiff generation
>> Right now the source contents of unstable is, unpacked, 220MB. (Packed
>> with gzip it's 28MB, while the binary contents per arch each have 18MB
>> packed).
> That should not be a problem in any non-joke box. Unless you'll run it
> in a memory-constrained vm or something.
Well. For our archives it is turned on in the main and the backports one. I
don't think main will ever run into trouble there:
             total       used       free     shared    buffers     cached
Mem:      33006584   29241780    3764804          0    2343936   20783680
while backports isn't as big, but still large enough:
             total       used       free     shared    buffers     cached
Mem:       8198084    7352164     845920          0    1063012    5650672
>> Let's add a safety margin: 350MB is a good guess for the largest.
>> A Packages file hardly counts compared to them; unpacked it's just
>> some 34MB.
> I.e. something very easy to keep in RAM on a "server class" or "desktop
> class" box.
Yes.
>> > Other than that, git loads entire objects to memory to manipulate them,
>> > which AFAIK CAN cause problems in datasets with very large files (the
>> > problem is not usually the size of the repository, but rather the size
>> > of the largest object). You probably want to test your use case with
>> > several worst-case files AND a large safety margin to ensure it won't
>> > break on us anytime soon, using something to track git memory usage.
>> Well, yes.
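
To give an idea how such a test could look, here is a rough sketch in
Python: commit one worst-case sized file and watch git's peak memory.
The 350MB is just the guess from above, the paths are invented, and
ru_maxrss being in KiB assumes Linux.

#!/usr/bin/env python3
# Rough sketch: commit one worst-case sized file into a scratch repo and
# report git's peak memory via the resource accounting of child processes.
import os
import resource
import subprocess

REPO = "/tmp/pdiff-memtest"        # invented scratch location
WORST_CASE = "Contents-amd64"      # stand-in name for the largest file

os.makedirs(REPO, exist_ok=True)
subprocess.run(["git", "init", "-q", REPO], check=True)

# Write ~350MB of incompressible data, in 1MB chunks.
with open(os.path.join(REPO, WORST_CASE), "wb") as f:
    for _ in range(350):
        f.write(os.urandom(1024 * 1024))

subprocess.run(["git", "-C", REPO, "add", WORST_CASE], check=True)
subprocess.run(["git", "-C", REPO,
                "-c", "user.name=test", "-c", "user.email=test@example.org",
                "commit", "-q", "-m", "worst case"], check=True)

# ru_maxrss is the peak RSS (KiB on Linux) over all waited-for children,
# i.e. the hungriest of the git invocations above.
peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print("peak git RSS: %d KiB" % peak)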
> At the sizes you explained now (I thought it would deal with objects 7GB
> in size, not 7GB worth of objects at most 0.5GB in size), it should not
> be a problem in any box with a reasonable amount of free RAM and vm
> space (say, 1GB).
Right, could have written that better.
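Since the largest single object is the limit, not the repository size, a
similarly rough sketch for finding it (assumes a git new enough to know
cat-file --batch-all-objects; the repo path is invented):

#!/usr/bin/env python3
# Rough sketch: list all objects with their sizes, print the biggest blob.
import subprocess

REPO = "/srv/archive.git"  # invented path

out = subprocess.run(
    ["git", "-C", REPO, "cat-file", "--batch-all-objects",
     "--batch-check=%(objecttype) %(objectsize) %(objectname)"],
    check=True, capture_output=True, text=True).stdout

blobs = (line.split() for line in out.splitlines() if line.startswith("blob "))
otype, size, name = max(blobs, key=lambda f: int(f[1]))
print("largest blob: %s, %s bytes" % (name, size))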
--
bye, Joerg
<liw> I'm a blabbermouth