
Re: Kernel 3.14.x bug? rm, mv root-owned files



The Wanderer wrote:
> By this, I meant that I think 'rm' should refuse permission to remove a
> particular hardlink to the file when there are multiple such hardlinks,
> just as I think it should when there is only one.

Hmm...  That would be a completely different operating model.  A valid
model could be constructed that operates differently.  But that
wouldn't be the file system behavior model that we have been using for
the last forty years.

> >   rm -f /tmp/testdir1/testfile1
> > 
> > That must work.  Right?  Because we have not actually deleted the
> > file.  Not yet anyway.  The file is still there.  The file hasn't
> > been modified at all.
> 
> I disagree that this "must work", in fact. I would say that this should
> not work, because...
>...
> I agree that this behavior must be consistent.
> 
> I simply believe that it should be consistent by refusing the deletion
> in both cases, rather than allowing it in both cases.

That is quite a harsh restriction.  It basically removes much of the
functionality that has been added by file links (hard links).  A lot
of things depend upon being able to make a hard link to a file and
then when the file is removed from the original directory it remains
behind in the other places the file is linked.  A lot of backup
programs rely upon this (BackupPC, many others).  A lot of deployment
programs (stow).  It is just one of those fundamental constructs that
has been there since the beginning that so many things rely upon.
Taking that away would break a great deal of existing functionality.
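The behavior described above can be sketched in a few shell commands
(the paths under /tmp are invented for the demo): removing one name of
a multiply linked file leaves the data reachable through the other
name.

```shell
# Demo of hard-link semantics (demo paths are made up):
rm -rf /tmp/hl_demo
mkdir -p /tmp/hl_demo
cd /tmp/hl_demo
echo "data" > original
ln original linked       # second hard link: both names refer to one inode
rm original              # unlinks one name; the inode's link count drops to 1
cat linked               # contents are still there under the surviving name
```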

> The only case I can see where this would seem to cause problems is in
> the handling of 'rm -f', i.e. overriding the permission flag. In that
> scenario, I don't think it would be unreasonable to consult
> file-ownership information (possibly including group information) to
> determine whether to allow the override.

The 'rm' command is a little odd for Unix in that it actually stats
the file before removing it, and if the file is read-only then it
"helpfully" asks the user for verification first.  All the -f option
flag does is avoid the stat and just attempt the unlink(2) system
call without asking.  That is only an 'rm' command-line thing.

If you try this from perl, python, ruby, or anything else, there isn't
any user interaction at all.  Try this on a read-only file:

  touch testfile1
  chmod a-w testfile1
  ls -ldog testfile1
    -r--r--r-- 1 0 Jun 12 15:17 testfile1
  perl -e 'unlink($ARGV[0]) or die "unlink: $!";' testfile1
  ls -ldog testfile1
    ls: cannot access testfile1: No such file or directory

Note that the rm question is a feature of rm and not of unlink(2).
Programs unlinking files just unlink the file, no questions asked.

There isn't really any concept in rm of overriding the permission
flag.  It just avoids being helpful in that case.  Helpful being
relative.

Also there is a race condition of sorts.  When rm stat(2)s the file
first and then proceeds to unlink(2) it, those are two separate system
calls.  In between those two calls it is possible to chmod the file,
or unlink the file, or do other things to the file.  Two system calls
back to back are not atomic.  It is by its nature a race with other
actions.  It is possible to confuse the command by changing things in
between.  Removing the stat(2) call with -f avoids that race by
avoiding the stat entirely.  And it speeds things up too if there are
a lot of files or if the files are on NFS or another remote
filesystem.
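Here is a quick sketch of the -f behavior on a read-only file (the
filename is invented for the demo).  Plain rm would see the read-only
mode and prompt; -f goes straight to the unlink:

```shell
touch /tmp/rmf_demo
chmod a-w /tmp/rmf_demo          # mode is now r--r--r--
rm -f /tmp/rmf_demo              # no stat-and-ask; just unlink(2)
test ! -e /tmp/rmf_demo && echo "gone without a question"
```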

> I'll admit that the ramifications of that (e.g., do we allow deletion by
> group members, or not?) would start to get complicated enough that
> trying to make them general and consistent might effectively involve a
> reimplementation of ACLs, though.

Have you ever used AFS?  The permissions are all ACL based.  And the
point I have here is that it is a different security model.  A
re-implementation using ACLs.

  http://docs.openafs.org/UserGuide/HDRWQ46.html

You might like that security model better.  It is different from the
classic Unix file system model.  The problem is that most software
expects the traditional Unix file system permissions and often
requires modification to operate on an AFS system.  Or it did when
last I used AFS many years ago.

> I think this change (which was my original intent) to the proposed
> paradigm would eliminate the "last reference count is an open file
> descriptor" problem, because in that situation it could continue to work
> just the way it currently does: if the last open file descriptor is
> closed when there are no filesystem links to the file node, the file is
> removed. The only difference would be that the last filesystem link
> could be removed only by someone who has write permission to the file,
> rather than someone who has write permission to the directory containing
> the last filesystem link.

If it had been that way at the beginning when the file system
implemented the behavior then I would probably be fine with it.
Because then that would be the way that it is and has been.

But instead it was implemented otherwise.  And so now, all of these
years later, it is simply too late to have it any other way.  Too much
expects it to be the way that it was implemented rather than some
other way.  Too much water under the bridge to think about moving the
town away from the river. :-)

> >> As such, it seems as if deleting a file *should* require write
> >> permission to that file.
> > 
> > I agree that it *feels* like a read-only file should never be
> > possible to be freed.  But there isn't any practical way to make that
> > happen.
> 
> I'm afraid I don't see why. (Barring the 'rm -f' handling issue
> mentioned above. It might be argued that permitting override of the
> permissions flag is itself the origin of this problem.)

I think the default 'rm' behavior of stat(2)'ing the file first and
asking the question has been the seed of a lot of misunderstanding.
It is so different from the usual Unix model, where mostly silent
operation is the norm.  I wonder how the world would be different if
rm had not had that behavior coded into it.  Because -f is not an
override.  It simply doesn't stat and doesn't ask anything.

And most importantly none of the other utilities have that behavior,
so it isn't a convention.  It is the habit of one single solitary
tool.  There isn't a second data point to draw a line between them.
It is just one data point.  Every other program that unlink(2)s simply
unlinks the file without asking, like the perl example I posted.
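For another data point, GNU find's -delete action also unlinks
read-only files with no interaction at all (the filename is invented
for the demo):

```shell
touch /tmp/find_demo
chmod a-w /tmp/find_demo
find /tmp -maxdepth 1 -name find_demo -delete   # silent, despite r--r--r--
test ! -e /tmp/find_demo && echo "removed, no prompt"
```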

> What I specifically think should be possible, which doesn't seem to be
> possible under the apparent current paradigm, is a situation where there
> is a directory in which user A can create new files and modify those
> files but in which there is a file which user A cannot delete. Maybe
> that could be done through group permissions and the like, I haven't
> experimented, but I wouldn't expect that based on discussion so far.

It is possible to place the protected file in a subdirectory and
through different permissions of the subdirectory to prevent it being
removed.  But fundamentally if one needs files to be modified and
needs files not to be modified then the way to do that is to put them
in different directories.  Otherwise there are too many possible
problems.
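A sketch of the subdirectory approach (paths invented for the demo;
note this protection does not apply to root, which bypasses
permission checks):

```shell
umask 022
mkdir -p /tmp/prot_demo/protected
touch /tmp/prot_demo/protected/keep
chmod a-w /tmp/prot_demo/protected   # directory is now read-only: mode 555
# For a normal user, unlink(2) in a non-writable directory fails,
# regardless of who owns the file inside it:
rm -f /tmp/prot_demo/protected/keep 2>/dev/null || echo "removal refused"
```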

And there are the Linux second extended file system attributes,
changed with chattr and listed with lsattr.  But those require root
permission, taking them out of the toolbox for normal users.  And they
don't work on tmpfs, for example.  So those just don't seem as useful
to me.

In summary I don't disagree that a different file system model is
possible.  It just isn't what we have.

Bob
