
Re: massive copy



--- On Tue, 14/4/09, Frank Bonnet <f.bonnet@esiee.fr> wrote:

> From: Frank Bonnet <f.bonnet@esiee.fr>
> Subject: Re: massive copy
> To: glynastill@yahoo.co.uk
> Cc: "Debian User List" <debian-user@lists.debian.org>
> Date: Tuesday, 14 April, 2009, 10:02 AM
> Glyn Astill wrote:
> > --- On Tue, 14/4/09, Frank Bonnet
> <f.bonnet@esiee.fr> wrote:
> > 
> >> From: Frank Bonnet <f.bonnet@esiee.fr>
> >> Subject: massive copy
> >> To: "Debian User List"
> <debian-user@lists.debian.org>
> >> Date: Tuesday, 14 April, 2009, 9:14 AM
> >> Hello
> >>
> >> I have to copy around 250 Gb from a server to a
> Netapp NFS
> >> server
> >> and I wonder what would be faster ?
> >>
> >> first solution
> >>
> >> cp -pr * /mnt/nfs/dir/
> >>
> >> second solution ( 26 cp processes running in parallel )
> >>
> >>
> >> for i in a b c d e f g h i j k l m n o p q r s t u
> v w x y
> >> z
> >> do
> >> cp -pr $i* /mnt/nfs/dir/ &
> >> done
> >>
> > 
> > Perhaps you could try some sort of tar pipe if you've got a nice CPU?
> > 
> > tar cf - * | (cd /mnt/nfs/dir/ ; tar xf - )
> > 
> 
> Yes, the machine has nice CPUs and a lot of RAM.
> Do you think it will be faster using tar rather than cp?
> 

I'd like to think it would help. If the files are quite compressible, perhaps you could add a 'z' in there too...
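For the archives, here's a sketch of what that would look like: the 'z' flag gzips the stream on the sending side and decompresses it on the receiving side, so less data crosses the NFS mount. Wrapped as a function for clarity (the function name and the /data source path are just illustrative; /mnt/nfs/dir is the destination from the thread):

```shell
#!/bin/sh
# copy_tree: stream a directory tree through a gzip-compressed tar pipe
# into a destination directory. 'c' creates the archive on stdout,
# 'z' compresses it, and the second tar ('xz') unpacks it in $dst.
# tar preserves permissions and timestamps by default when run this way.
copy_tree() {
    src=$1
    dst=$2
    (cd "$src" && tar czf - .) | (cd "$dst" && tar xzf -)
}

# e.g. copy_tree /data /mnt/nfs/dir
```

Whether the compression wins depends on the data: for already-compressed files (media, archives) the extra CPU work buys nothing, while for text-heavy trees it can cut the bytes on the wire considerably.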




