
Re: large files



On Thursday 24 April 2003 04:33 pm, Tarragon Allen wrote:
> On Fri, 25 Apr 2003 07:43 am, David Bishop wrote:
> > I have a user that really likes to create files.  Then, they don't clean
> > them up.  We have already put a quota* on them, but unfortunately, their
> > directory is so large and convoluted that they can't even figure out
> > where all the disk space has gone.  Is there a sane way to generate a
> > report showing the disk usage from a certain point on down, sorted by
> > size?  Here's kinda what I mean:  for a standard user, I would just run
> > 'du /u/foo | sort -n | tail -20', and tell them to clean up whatever is
> > there. However, I've let a du | sort -n run on this directory for over
> > four hours, before giving up in disgust.  It is almost 100Gigs of files,
> > with at least four or five directories that have 20K to 30K+ files each
> > (plus hundreds of other subdirs).  *And*, it's on a filer, so there are
> > .snapshot directories that du thinks it has to plow through, quintupling
> > the amount of work.   I'd also like to make this into a weekly report, so
> > that they can make it part of their Friday routine (let's go delete 10
> > gigs of data! Woohoo!).
> >
> > Ideas?  Other than killing them, of course, no matter how tempting that
> > is...
> >
> > *100Gigs!
>
> I'd play with the --max-depth setting on du; this will let you limit the
> output a bit, but it will still have to walk the entire directory tree to
> count it. Failing that, if you suspect it's some really big files taking up
> the room, then a find with -size +1000k or similar might be your friend.

You're gonna think I'm an idiot, but I read the man page on du probably 3 or 4
times and never saw --max-depth.  Thanks, I'll play with that.
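
For the weekly report, I'm thinking of something along these lines.  The path,
size cutoffs, and address below are just placeholders, and the --exclude and
-prune bits are there to keep GNU du and find out of the filer's .snapshot
trees:

  #!/bin/sh
  # Weekly "where did the disk go" report (GNU du/find assumed).
  # TOP and REPORT are example values, not the real paths.
  TOP=/u/foo
  REPORT=/tmp/du-report.txt

  # Per-directory totals in KB, two levels deep, skipping .snapshot,
  # with the 20 biggest directories at the end.
  du -k --max-depth=2 --exclude='.snapshot' "$TOP" \
      | sort -n | tail -20 > "$REPORT"

  # Individual files over ~100MB (size in KB, then path), again skipping
  # .snapshot, 20 largest at the end.
  find "$TOP" -name .snapshot -prune -o -type f -size +100000k \
      -printf '%k\t%p\n' | sort -n | tail -20 >> "$REPORT"

  # Address is a placeholder; drop this in a Friday cron job.
  mail -s "Disk usage report for $TOP" user@example.com < "$REPORT"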

-- 
MuMlutlitithtrhreeaadededd s siigngnatatuurere
D.A.Bishop


