Re: Idea: mount /tmp to tmpfs depending on free space and RAM
On 11 June 2012 15:04, Josselin Mouette <email@example.com> wrote:
> Le lundi 11 juin 2012 à 14:53 +0100, Aneurin Price a écrit :
>> On 8 June 2012 12:04, Bjørn Mork <firstname.lastname@example.org> wrote:
>> > Any file system will run out of space given the broken applications
>> > mentioned in this thread.
>> It is not productive to redefine applications as 'broken' simply
>> because they do not conform to an arbitrary set of requirements that
>> you have just added, especially when you haven't even given any
>> indication of what you consider 'non-broken' behaviour.
> So your applications are not broken because they all have access to
> infinite storage?
> Your life must be so fantastic.
(Note that we are talking about applications which fail gracefully
when confronted with ENOSPC, but which are likely to do so more often
when the size of /tmp is restricted.)
In general, my applications assume that temporary files can be stored
in /tmp unless configured otherwise.
It would be possible of course for all applications to be enhanced to
look at the size of the data they may need to store, such that files
of (say) 1GB could be treated differently to files of 1kB. I'm not
sure what would happen in the case that the size is unknown to start
with, although perhaps it would be safest to assume that the size will
grow without bound (though of course this is likely to be untrue in
reality, unless perhaps the application is naively decompressing a
maliciously crafted archive).
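To make the point concrete, here is a minimal sketch (in Python, purely for illustration) of what such size-aware placement might look like. The helper name, the /var/tmp fallback, and the treat-unknown-as-unbounded rule are all assumptions of mine, not anything any application actually does today:

```python
import shutil

# Hypothetical fallback location; a real application would presumably
# make this configurable rather than hard-coding it.
FALLBACK_DIR = "/var/tmp"

def choose_tmp_dir(expected_size):
    """Pick /tmp if it appears to have room for expected_size bytes,
    otherwise fall back to FALLBACK_DIR.

    expected_size=None means the size is unknown; per the cautious
    assumption above, treat it as potentially unbounded and fall back.
    Note the check is inherently racy: free space can change between
    the check and the write, so ENOSPC must still be handled anyway.
    """
    if expected_size is None:
        return FALLBACK_DIR
    free = shutil.disk_usage("/tmp").free
    return "/tmp" if free >= expected_size else FALLBACK_DIR
```

Even this toy version shows the cost: the check races against other writers, so the ENOSPC handling it was meant to avoid is still required.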
It's not clear though what would be the preferred behaviour -
traditionally the best location for temporary files has always been
/tmp regardless of the size of those files, so applications being so
enhanced would need to come up with some new policy. It's this part
which is missing from the claims of brokenness - no indication of what
the claimant would consider 'correct'.
Furthermore, it's also not clear what the benefit would be - this
amounts to the conversion of essentially one line of code (in an
unknown but potentially large set of applications) into a fair bit
more, increasing the chance of bugs, to solve a problem which has not
existed in practice until now.
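For reference, the "one line" in question really is one line in most languages - e.g. tmpfile(3) in C, or in Python:

```python
import tempfile

# The traditional one-liner: let the library pick the location
# (honouring $TMPDIR, defaulting to /tmp); any ENOSPC surfaces as an
# ordinary write error, handled like any other I/O failure.
with tempfile.TemporaryFile() as f:
    f.write(b"scratch data")
    f.seek(0)
    data = f.read()
```

Everything in the previous sketch would have to be bolted on around this call, in every application, to cope with a deliberately undersized /tmp.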
Your exaggeration of 'infinite storage' is disingenuous - in practice
it is an entirely reasonable assumption that an application operating
on a file will have *enough* storage to do so, as this has been the
case on all mainstream desktop systems for over a decade - bearing in
mind that mainstream desktop systems are exactly the ones which should
be targeted by *default* options. If it really doesn't have enough
storage, then what is the application supposed to do better? But if
the system really does have that space, but apportioned in such a way
that the application cannot use it when it needs to, then it is the
system which is at fault, not the application for not working out some
new implied but unspecified policy.