
Re: Summary: Moving /tmp to tmpfs makes it useless



2012/6/10 Uoti Urpala wrote:

>> What false claim are you talking about?
>
> The problem is that you've posted quite a few of those false claims
[...]
> For example, the page you linked for your "SSDs can take 50 years
> of writing before they wear out" claim has a first paragraph saying
> durability IS again an issue

Yes, it is an issue for MLC SSD disks; that's why in the summary I wrote
"SLC SSD disks". I even explicitly wrote that it depends on the chip type.
That's why I gave that link: so people could check the type of their SSD,
learn the SLC/MLC difference, read about the calculation method (which is
valid for any SSD disk), and decide whether they should worry.
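
For reference, the endurance arithmetic such pages describe boils down to
something like the sketch below; the capacity, cycle count and daily write
volume are illustrative assumptions of mine, not figures from the linked
page:

  # Rough SSD endurance estimate. A sketch only: the capacity, P/E
  # cycle count and daily write volume are illustrative assumptions,
  # not measurements of any particular drive.
  capacity_gb = 64            # drive capacity
  pe_cycles = 100000          # P/E cycles typically quoted for SLC
  # pe_cycles = 10000         # ...and for MLC, hence the difference
  writes_gb_per_day = 100     # sustained daily write volume

  total_writable_gb = capacity_gb * pe_cycles  # assumes ideal wear leveling
  lifetime_years = total_writable_gb / writes_gb_per_day / 365
  print("~%.0f years before wear-out" % lifetime_years)

Plug in MLC-class cycle counts and the same arithmetic shrinks the estimate
by an order of magnitude, which is exactly why the chip type matters.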

Everything looks correct. No false claims there...

> As another example, this part from your FAQ is nonsense:
>> When you read from ext3, the oldest part of the filecache is dropped and
>> data is placed to RAM. But reading from swap means that your RAM is full,
>> and in order to read a page from swap you must first write another page
>> there. I.e. sequential read from ext3 turns into random write+read from
>> swap.
>
> There is no such difference reading from a normal filesystem or reading
> from swap. Iterating reads from swap can trigger writes, but if that's
> what you're referring to here, you've clearly either failed to
> understand what actually happens or are writing a very misleading
> description.

Maybe I just expressed the theory poorly. Basically it boils down to:
  In the "write a large file, then read it back" case (a very common
  temporary-file usage scenario), at the reading stage, instead of a plain
  sequential read (as it would be on ext3), you get read+write from swap.
Then I tried to explain:
  That's because at the write stage tmpfs was swapped out. The fact that it
  was swapped out means that RAM is full, with no free cache to use. Now,
  when you start reading the file back, you need to read it from swap. But
  you cannot do that, because there's no free RAM: in order to read a page
  from swap you must first write another page of tmpfs there. That's why a
  sequential read turns into random write+read from swap.
That's what I wrote in the summary... or at least tried to write.

When I wrote the summary it was just a theory, based on your email; I had
not done any tests at that point. When I finally did, a few hours ago, I
was surprised by how true it turned out to be. My explanation could be
wrong, but the test cannot be: every read generated an equal number of
writes.
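
The test was essentially the write-then-read-back loop below. A minimal
sketch, assuming a Linux box with swap enabled and /tmp mounted as tmpfs;
the path and the 2GB size are illustrative and must exceed your free RAM
to show the effect. It samples the pswpin/pswpout counters from
/proc/vmstat around the read stage:

  import os

  def swap_counters():
      # (pages swapped in, pages swapped out) since boot
      stats = dict(line.split() for line in open("/proc/vmstat"))
      return int(stats["pswpin"]), int(stats["pswpout"])

  path = "/tmp/bigfile"            # must live on the tmpfs mount
  size = 2 * 1024**3               # pick something larger than free RAM
  chunk = b"\0" * (1 << 20)        # 1MB

  with open(path, "wb") as f:      # write stage: pushes tmpfs into swap
      for _ in range(size // len(chunk)):
          f.write(chunk)

  before = swap_counters()
  with open(path, "rb") as f:      # read stage: watch writes appear too
      while f.read(1 << 20):
          pass
  after = swap_counters()

  print("swap-in pages:  %d" % (after[0] - before[0]))
  print("swap-out pages: %d" % (after[1] - before[1]))
  os.remove(path)

When the file fits in RAM the deltas stay near zero; once it doesn't, the
swap-out counter grows alongside the swap-in one, which is the
read-turns-into-write behaviour described above.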

To me this means that as long as Debian creates a swap partition by
default, it should never create large tmpfs mount points by default, or it
may badly affect SSD users.

If you don't have a better explanation, then why do you think mine is
wrong? Of course, if you do have a better explanation for the results of
that test, I'm also interested in reading it.

> I think you'd normally start hitting the tmpfs size limit before the
> problematic behavior shown by the script would become a serious issue.

According to my theory, the only thing you need to trigger the problem is
a file on tmpfs that is larger than free RAM. I.e., if you have 1GB of RAM
and a 600MB tmpfs (the default for 2GB of swap), you'll get swap
reads+writes even with a 500MB file, if your GNOME+Firefox take 600MB and
less than 500MB of RAM is left for cache.
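
To make that arithmetic concrete, here is a rough check one could script.
The 500MB figure is from the example above; approximating "RAM free for
cache" as MemFree+Cached from /proc/meminfo is my assumption, and only a
heuristic:

  # Back-of-envelope check for the scenario above. A heuristic sketch:
  # "RAM available for cache" is approximated as MemFree + Cached.
  def meminfo_kb(field):
      for line in open("/proc/meminfo"):
          if line.startswith(field + ":"):
              return int(line.split()[1])   # values are in kB
      raise KeyError(field)

  file_mb = 500                    # size of the planned tmpfs file
  cache_mb = (meminfo_kb("MemFree") + meminfo_kb("Cached")) // 1024
  if file_mb > cache_mb:
      print("%dMB file > ~%dMB reclaimable RAM: expect swap reads+writes"
            % (file_mb, cache_mb))
  else:
      print("~%dMB RAM free for cache; the file should stay in memory"
            % cache_mb)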

-- 
  Serge

