> Consider a long-running task, which will take days or weeks (which is
> the norm in simulation and science domains in general). The system
> emitted a warning after three days that it'll delete my files in three
> days. My job won't be finished, and I'll be losing three days of work
> unless I catch that warning.
>
> Now consider these tasks are run on (dark) servers, where users'
> daemons log in to run the tasks but users do not. How can the user
> know? What can they do? The same can be said for long-running daemons
> like mail servers, CI runners and such.
>
> One may argue that we can change the configuration, which is true.

You're making a strong argument here, indeed. I personally manage a horde of bookworm VMs which I really don't want to have to watch precisely for minute changes that break the system in subtle ways like this.

Putting /tmp in RAM also won't work, as discussed here. Apart from SBCs, no other Debian use case would benefit from it. There are plenty of people using cloud virtual machines happily running Debian with 1 GB of RAM, running clamav, mail servers, apache, and so on. Let's not eat up that RAM further if we really don't have to. I know because I am such a user, and certainly not the only one. I also don't know how this would impact people who run a boatload of containers with Debian.

> On the other hand, if we need to change the configuration 99% of the
> time, why are we making the change to a worse one in the first place?

There's been quite some debate here, which is good. This sets us apart from corporate-run Linux in that there's technical democracy in decisions impacting users. Maybe calmly listing and weighing the pros and cons of this decision, and how good or bad its impact would be on programs and users, could help drive a decision.
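For reference, the configuration change alluded to above is small. A sketch, assuming the systemd-tmpfiles mechanism under discussion: per tmpfiles.d(5), a file in /etc/tmpfiles.d/ with the same name shadows the vendor copy shipped in /usr/lib/tmpfiles.d/, so aging for /var/tmp can be disabled locally while keeping the /tmp behavior:

```
# /etc/tmpfiles.d/tmp.conf -- shadows /usr/lib/tmpfiles.d/tmp.conf
# Keep the upstream 10-day aging for /tmp; a "-" in the age field
# disables automatic cleanup of /var/tmp entirely.
q /tmp 1777 root root 10d
q /var/tmp 1777 root root -
```

Of course, carrying and auditing such an override across a fleet of VMs is exactly the per-machine maintenance burden described above.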
For instance, adopting this behavior:

- aligns us with upstream (neutral, in my opinion);
- prevents clutter in /var/tmp, mainly from misbehaving applications (users filling up their own drive is their fault; they can still dd of=/ anyway; we shouldn't put training wheels on our OS) (good);
- might surprise users who got used to /var/tmp being a scratch space a long time ago, and might cause frustration if their files randomly disappear. Yes, they shouldn't be storing files there anyway, but deleting those files without the users explicitly setting that mechanism up, or hitting Y on a big prompt, isn't helpful (regression);
- might require updates to older applications which wrongly use /var/tmp, as discussed above (neutral; that wasn't a sturdy mechanism anyway).

Thanks,
Alexandru