Re: Delete 4 million files
On 2009-03-19_01:56:26, Hal Vaughan wrote:
> On Mar 19, 2009, at 1:38 AM, Paul E Condon wrote:
>> On 2009-03-18_16:37:53, kj wrote:
>>> Hi guys,
>>> This might seem like a stupid question, but I'm hoping there's a
>>> better way.
>> running 'df' when you are curious about how much progress has been
>> made. I suggest 'lost+found' as a good choice for 'hide'.
> Won't this run into a problem with the infamous "argument list too long"
> response from rm that we've been talking about, once it descends into the
> directory with all those files? Or is there some reason it wouldn't?
No. 'rm' uses recursive descent, depth first. At no point in the
process is it doing anything but reading a directory file and either
unlinking a name from its inode, or descending into the directory that
the inode points to, and so on. The "argument list too long" error
comes from the kernel's limit on the argument list passed to exec(),
which only bites when the shell expands a wildcard into thousands of
names; 'rm -rf dir' hands rm a single argument and lets it do its own
walking. If the tree is very deep there might be a stack-depth
problem, but 1) this doesn't seem to be a very deep tree, and 2) I've
done this on whole archive trees of 100GB and more. The process can
take a while, but it will terminate successfully, if there is not a
power failure ... And the push-down stack for tree traversal is just a
stack of inode numbers, which can surely be kept in memory for even
the most monstrous directory tree traversal!
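The distinction can be demonstrated in miniature. A minimal sketch
(temporary paths made up on the fly): 'rm -rf' walks the directory
itself, so the file count never appears on its command line, whereas
'rm dir/*' asks the shell to expand every name into the argument list
and, with millions of files, would blow past the exec limit with
"Argument list too long".

```shell
#!/bin/sh
# Sketch: 'rm -rf' never builds a huge argv, it reads the directory
# itself. 2000 files is only an illustration; with millions, the
# glob form 'rm bulk/*' would fail at the shell's exec() call.
set -e
demo=$(mktemp -d)
mkdir "$demo/bulk"
for i in $(seq 1 2000); do : > "$demo/bulk/f$i"; done
# rm -rf gets ONE argument and descends the tree depth first:
rm -rf "$demo/bulk"
ls "$demo"   # prints nothing: bulk and its 2000 files are gone
```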
I think the OP was put off by not having any progress reports on the
deletions in progress. It can take some time, because the OS updates
the link count in each inode, and since most of these inodes have a
link count of just one when they are first statted, unlinking drops
the count to zero and the inode must also be returned to the free
inode pool. All these steps must happen no matter what higher-level
software one tries to use.
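One can fake a progress report from a second terminal. A crude sketch
(paths and counts are made up for illustration): while 'rm -rf' grinds
away, re-count the survivors with 'find | wc -l', or watch free inodes
climb with 'df -i' as suggested above. Here both counts are taken in
one script just to show the idea.

```shell
#!/bin/sh
# Sketch: count files before and after a delete. In real use the
# 'find | wc -l' would run from another terminal mid-delete, or
# 'df -i <mountpoint>' would show the free-inode count rising.
set -e
d=$(mktemp -d)
for i in $(seq 1 500); do : > "$d/f$i"; done
before=$(find "$d" -type f | wc -l)
rm -rf "$d"                                  # the delete being watched
after=$(find "$d" -type f 2>/dev/null | wc -l)
echo "before=$before after=$after"
```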
The only thing faster than 'rm -rf' is to reformat the disk, but I
don't think the OP is so desperate for speed that he will choose that
option. Or, maybe, there really is nothing else on the particular
disk in question...
Maybe the OP should copy the other stuff to another disk, and then reformat.
Paul E Condon