Re: Moving /tmp to tmpfs makes it useful
Excerpts from Ted Ts'o's message of 2012-05-25 18:56:55 -0700:
> On Fri, May 25, 2012 at 02:49:14PM +0100, Will Daniels wrote:
> > On 25/05/12 13:52, Ted Ts'o wrote:
> > >So what? If you write to a normal file system, it goes into the page
> > >cache, which is pretty much the same as writing into tmpfs. In both
> > >cases if you have swap configured, the data will get pushed to disk;
> > That's not at all the same, the page cache is more temporary, it's
> > getting flushed to disk pretty quick if memory is tight (presumably)
> > but in the same situation using tmpfs going to swap is surely going
> > to be more disruptive?
> There will be some, but really, not that much difference between going
> from tmpfs to swap compared to files written to a filesystem (in both
> cases the data is stored in the page cache, whether it's a tmpfs file
> or an ext2/3/4 or xfs or btrfs file) in many cases.
> The major difference is that tmpfs pages only get written out to swap
> when the system is under memory pressure. In contrast, pages which
> are backed by a filesystem will start being written to disk after 30
> seconds _or_ if the system is under memory pressure.
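[For reference, the ~30-second figure Ted mentions comes from the kernel's
writeback tunables; the values below are the usual defaults, expressed in
hundredths of a second, though a distro may ship different ones:]

```shell
# How long a page may stay dirty before the flusher writes it out
# (default 3000 centisecs = 30 seconds):
cat /proc/sys/vm/dirty_expire_centisecs

# How often the flusher threads wake up to look for expired dirty pages
# (default 500 centisecs = 5 seconds):
cat /proc/sys/vm/dirty_writeback_centisecs
```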
On laptops and other power sensitive devices, this is pretty critical.
Hypothetical: I have 2GB of RAM, and I want to watch a 50MB video file
on a connection that will take, say, 10 minutes to cache the whole thing
(and it's a 10-minute video).
With a regular filesystem hosting /tmp, every 30 seconds I will wake up
the hard disk and write data to it. I doubt most spinning disks will
go to sleep in < 30 seconds, so that means more than 10 minutes solid of
hard disk spinning.
With tmpfs, there is no memory pressure, so my disk never even spins up
to write anything. If I do run into memory pressure, yes, I need to use
swap at that point. But by then I've got a lot more than just the disk
draining power.
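[For anyone wanting to try this, a minimal /etc/fstab entry for a tmpfs
/tmp looks like the line below; the size=1g cap is just an illustrative
choice, not a recommendation:]

```
# /etc/fstab: cap tmpfs at 1GB so a runaway /tmp can't eat all of RAM+swap
tmpfs  /tmp  tmpfs  defaults,noatime,size=1g  0  0
```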