
Re: /tmp as tmpfs and consequence for imaging software



On Sun, 13 Nov 2011, Bastien ROUCARIES <roucaries.bastien@gmail.com> wrote:
> Ok, could we make some policy about /tmp use?  Like do not create files
> above 10M?  And file RC bugs if apps do this?

10M is small by today's standards.

On Sun, 13 Nov 2011, Bastien ROUCARIES <roucaries.bastien@gmail.com> wrote:
> We could not increase tmpfs over 50% to
> 70% of physical RAM without deadlock (OOM and so on).

Can you substantiate this claim about a deadlock?  I've used a tmpfs that was 
larger than physical RAM in the past without problems.

As for an OOM, that's just a matter of whether all the various uses of memory 
exceed RAM+swap.  For certain usage patterns more swap plus a large tmpfs works 
well.
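
For reference, here is roughly how you mount a tmpfs that is bigger than RAM 
(the mount point and the 8G size are just examples, say for a machine with 4G 
of RAM and plenty of swap):

  # mount -t tmpfs -o size=8G tmpfs /mnt/bigtmp

or the equivalent /etc/fstab line:

  tmpfs  /mnt/bigtmp  tmpfs  size=8G  0  0

Pages that don't fit in RAM just get pushed out to swap, and the size= limit 
is what stops a runaway writer from consuming all of RAM+swap.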

There was one time when I had to load a database from a dump file and it was 
unreasonably slow.  As the process was something I could repeat, I put the 
database on a tmpfs and then moved it to a regular filesystem after the load 
completed, which saved hours.
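
The process was something like the following (the paths are made up and the 
actual load/stop commands depend on which database server you run):

  # mount -t tmpfs -o size=16G tmpfs /var/lib/db-tmp
  ... point the database data directory at /var/lib/db-tmp and load the dump ...
  ... stop the database server ...
  # cp -a /var/lib/db-tmp/. /var/lib/db/
  # umount /var/lib/db-tmp

and then point the data directory back at the regular filesystem.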

On Sun, 13 Nov 2011, Carlos Alberto Lopez Perez <clopez@igalia.com> wrote:
> When the system is swapping heavily, if you are not using a preempt
> kernel the whole system will become so unresponsive while the swapping
> process is taking place that even your mouse pointer will stop moving.
> And the Debian kernel is not preempt.

Ben has already described how the preemptive kernel patch doesn't affect this.

But there are many corner cases where disk IO performance can suffer.  One 
that hit me a few times recently was moving big files from a USB flash device 
to an NFS server.  When a workstation had a bunch of programs running that used 
all RAM and a fair bit of swap, the command "mv /mnt/usb/* /mnt/nfs" would 
cause terrible performance, even to the point of interfering with the mouse 
pointer.  Really the file being read shouldn't be cached (because it will be 
unlinked), and the file being written shouldn't be cached either: the NFS 
server should be faster than the USB device (so write caching gains nothing) 
and it probably wouldn't make sense to cache it for reading.
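
One way of limiting the damage from such a copy (only a sketch, and the exact 
flags depend on your coreutils version) is to bypass the page cache with 
direct IO instead of using mv:

  for f in /mnt/usb/*; do
    dd if="$f" of=/mnt/nfs/"$(basename "$f")" bs=1M iflag=direct oflag=direct && rm "$f"
  done

Recent versions of dd also have iflag=nocache and oflag=nocache, which ask the 
kernel to drop the cached pages after use rather than bypassing the cache 
entirely.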

On Sun, 13 Nov 2011, Bastien ROUCARIES <roucaries.bastien@gmail.com> wrote:
> For instance, using gscan2pdf on a 60 page document creates more than 1.2G
> of image files under /tmp and crashes due to missing space.

I don't think that you can count on ANY filesystem having more than 1.2G of 
free space on a random system.  I run systems with less space than that on 
/home and systems with less space on /.  I think that sometimes the user just 
needs to know what they are doing; if you do something that creates a big file 
you need to ensure that there is enough space.
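
If the application honours the usual TMPDIR convention (gscan2pdf appears to 
use standard temporary file handling so this should work, but I haven't 
checked every code path), the user can point it at a filesystem that does have 
the space:

  $ df -h /var/tmp
  $ TMPDIR=/var/tmp gscan2pdf

with the df first to check that there really is enough room.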

As an aside, I give some users their own filesystem under /home so that WHEN 
(not if) they use up all available space they don't cause problems for other 
people.  No matter how much space you provide, there are people who will waste 
it all.  I also occasionally give some users their own filesystem under /home 
so that they will be unaffected by other users wasting space.
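
For reference, giving a user their own filesystem is easy with LVM (the volume 
group name, size, and user name are just examples):

  # lvcreate -L 20G -n home_john vg0
  # mkfs.ext4 /dev/vg0/home_john
  # mkdir -p /home/john
  # echo '/dev/vg0/home_john /home/john ext4 defaults 0 2' >> /etc/fstab
  # mount /home/john
  # chown john:john /home/john

A quota would also do the job, but a separate filesystem makes the limit very 
obvious to the user.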

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/

