
Re: large files



--On Thursday, April 24, 2003 15:43:31 -0600 David Bishop <tech@bishop.dhs.org> wrote:

I have a user that really likes to create files.  Then, they don't clean them up.  We have already put a quota* on them, but unfortunately, their directory is so large and convoluted that they can't even figure out where all the disk space has gone.  Is there a sane way to generate a report showing the disk usage from a certain point on down, sorted by size?  Here's kind of what I mean: for a standard user, I would just run 'du /u/foo | sort -n | tail -20' and tell them to clean up whatever is there.  However, I've let a du | sort -n run on this directory for over four hours before giving up in disgust.  It is almost 100 gigs of files, with at least four or five directories that have 20K to 30K+ files each (plus hundreds of other subdirs).  *And*, it's on a filer, so there are .snapshot directories that du thinks it has to plow through, quintupling the amount of work.  I'd also like to make this into a weekly report, so that they can make it part of their Friday routine (let's go delete 10 gigs of data! Woohoo!).

Ideas?  Other than killing them, of course, no matter how tempting that is...


Maybe use find and restrict the search depth to at least find some large dirs, plus exclude the snapshots?  Something like "find /where/ever -size +10000k" for the larger files, and for large dirs maybe (haven't tried) "... -size +1k -type d".  If you pipe this through awk you can easily sum it up.  I remember once writing a script to report disk usage on a web server above the purchased amount.
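A rough, untested sketch of that kind of pipeline, using /u/foo from the original post and assuming GNU find (for -printf); adjust the path and threshold to taste:

    # Skip the filer's .snapshot trees, list regular files over ~10 MB
    # with their size in bytes, biggest first.
    find /u/foo -name .snapshot -prune -o \
         -type f -size +10000k -printf '%s %p\n' |
      sort -rn | head -20

    # Same find, but let awk sum up everything that matched:
    find /u/foo -name .snapshot -prune -o \
         -type f -size +10000k -printf '%s\n' |
      awk '{ total += $1 } END { printf "%.1f MB over threshold\n", total/1048576 }'

Dropped into cron, something like that could also produce the weekly report David asked about.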

Cheers, Marcel


