Re: mounting tmpfs (and sysfs) defaultly in sarge?
At Wed, 24 Dec 2003 20:07:04 +0000,
Roger Leigh wrote:
> >> Please make sure that when this is added, you do default to a sensible
> >> size option. For example:
> >> none /dev/shm tmpfs defaults,size=500M 0 0
> > AFAIK, it's no problem. If we mount tmpfs without a size= option,
> > the kernel automatically limits it to half of physical memory. In
> > addition, memory held in tmpfs can be swapped out to disk; this is
> > one of the nice features of tmpfs. If we want to avoid this size
> > issue, then we need only specify size=1M (or 10M) for POSIX IPC.
> I've just tested this to be sure. It's half core memory.
> > If a user specifies the size option in fstab explicitly, then it's
> > the user's fault. /dev/shm is used for POSIX IPC, not as a RAM-based
> > temporary filesystem. If a user wants to mount tmpfs for temporary
> > work, then /dev/shm is not a suitable place.
> Of course. I use it both on /dev/shm and /tmp. But the user may
> still allocate arbitrarily large amounts using POSIX shm (shm_open()
> et al.), so users do have the potential to do this if they so choose.
> If e.g. a graphics application decides to share a whacking great
> pixbuf between processes, /dev/shm will be the backing store for that.
We know the same problem exists for other ways of allocating large
amounts of memory: users can also allocate a lot via malloc() or mmap()
or SysV shm, and so on, if the limit is set to unlimited, as you know.
Well, there is one difference between /dev/shm and the other memory
allocation methods: a user can put files onto /dev/shm, and then the
memory stays consumed until the files are removed, even after the
process exits.
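This difference can be demonstrated with a quick sketch: a file written
under /dev/shm keeps occupying tmpfs memory until it is deleted (the
filename and size below are illustrative):

```shell
# Sketch: a file created under /dev/shm occupies tmpfs memory until it
# is removed; filename and size are illustrative.
f=/dev/shm/tmpfs-demo.$$
dd if=/dev/zero of="$f" bs=1024 count=16 2>/dev/null
sz=$(wc -c < "$f")        # bytes now held in tmpfs
ls -l "$f"
rm -f "$f"                # only now is the memory released
```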
BTW, think about Solaris /tmp: they have used tmpfs there for a long
time. I also investigated Red Hat 9 and found that they don't set a
default size either. So I'm not concerned about memory consumption via
/dev/shm.
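For reference, the kernel's default cap mentioned above (half of
physical memory when no size= option is given) can be computed with a
minimal, Linux-specific sketch:

```shell
# Sketch: the default tmpfs size cap, when no size= option is given,
# is half of MemTotal as reported by /proc/meminfo (Linux-specific).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_kb=$((mem_kb / 2))
echo "MemTotal=${mem_kb} kB, default tmpfs cap=${half_kb} kB"
```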
> >> If you don't specify a default limit, any user can kill the system by
> >> creating a huge file in there. It's much nicer to get ENOSPC than a
> >> kernel panic. The installer could pick a sensible limit based on, for
> >> example, 20% of the combined core+swap size.
> > If system memory is exhausted, then we get (1) swap, (2) the OOM
> > killer, (3) ENOSPC. If we see a kernel panic, then it's a kernel bug.
> I [intentionally] killed my system without even so much as a panic(!)
> sometime last year. This was IIRC because I set too high a limit
> without the swap space to back it up. OTOH, this was with an early
> 2.4.x kernel which had other [VM] problems too.
Well, recent 2.4.x and 2.6 kernels (which introduced mempool and include
many VM improvements) seem more robust under high memory pressure.
> In summary, I think we can live with the default limits for most
> cases. However, for some systems these limits will be either
> 1.) Far too small. (See above--512 MiB /tmp + 2 GiB swap)
> 2.) Too large if > 1 tmpfs filesystem is mounted with default limits,
> with the potential to be exploited to down the system.
> (1) is a configuration issue which sysadmins will explicitly set up.
> (2) is a downside to the default limits however. Perhaps imposing
> even smaller default limits on the default size could help here--they
> can always be increased after the initial install.
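The smaller default limits suggested for point (2) could look like this
in fstab (the sizes are purely illustrative, not a recommendation):

```shell
# Hypothetical /etc/fstab entries with deliberately small default
# limits; the sizes can always be raised after the initial install:
#   tmpfs  /dev/shm  tmpfs  defaults,size=32M   0 0
#   tmpfs  /tmp      tmpfs  defaults,size=128M  0 0
```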
I currently provide new file /etc/default/tmpfs, which contains:
# tmpfs maximum size limit for /dev/shm. You can specify any tmpfs
# mount options using this variable. By default it is left empty, and
# the kernel automatically sets a valid upper limit.
TMPFS_SIZE=
The system administrator can control the /dev/shm size using TMPFS_SIZE
(e.g. TMPFS_SIZE=100m). So your points (1) and (2) can be resolved with
this parameter. The reason I don't fill in TMPFS_SIZE by default is
that it's difficult to decide on an appropriate size.
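A sketch of how a boot script could consume this variable (the mount
itself needs root, so only the option string is built here; the 100m
value is just an example, not a recommended default):

```shell
# Build tmpfs mount options from TMPFS_SIZE as /etc/default/tmpfs would
# supply it; size= is appended only when the variable is non-empty.
TMPFS_SIZE=100m
opts="defaults${TMPFS_SIZE:+,size=$TMPFS_SIZE}"
echo "$opts"
# mount -o "$opts" -t tmpfs tmpfs /dev/shm   # (as root) would apply it
```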