
Re: Large file sizes (2+Gb)



On Tue, 15 Jan 2002, Cassandra Ludwig wrote:

> Now I have tried dumping via NFS (using windows NFS systems
> *shudder*), ftp, and even samba, but all of these drop dead at the 2gb
> limit.  Samba refuses to even try sending the file.

I am not entirely sure which system is running which OS.  If both are
running Linux, you can do it; if one end is Windows, you may hit snags.

Upgrading your kernel to 2.4 and using ReiserFS will give you a
filesystem with 64-bit file sizes, but your libc may not be compiled to
support large files.  You may have to upgrade or recompile libc
(depending on which version you have), along with any statically linked
applications that need to handle the large files.
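
As a quick check, a program along these lines (the feature macro is the
important part) will only report the size of a 2GB+ file if your
toolchain has large file support.  This is just a sketch; the file to
test against is whatever you like:

    #define _FILE_OFFSET_BITS 64   /* ask libc for 64-bit file offsets */
    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);   /* fails on a 2GB+ file sans LFS */
        if (fd < 0) { perror("open"); return 1; }

        off_t size = lseek(fd, 0, SEEK_END);   /* off_t is 64 bits here */
        printf("%lld bytes\n", (long long)size);
        close(fd);
        return 0;
    }

If the open fails outright, or the size comes back wrong, the libc (or
the filesystem underneath) is what is holding you back.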

Is your fileserver on Windows?  If so, you will have a problem, since
most Windows filesystems suffer a file size limit: FAT16 stops at 2GB
and FAT32 at 4GB, though NTFS can go far beyond that.  In the case of
FTP and Samba, both can hit a 2GB limit as well when they are built
with 32-bit file offsets.

The simplest thing might be to send the files across in 2GB (or
smaller) chunks and reassemble them at the destination.  NFS is not a
very good protocol for bulk file transfer, so I would not suggest using
it: 100GB of data would take ages to send across, and even 2GB could
take a while.  NFSv3 does support large files; NFSv2 does not.  If you
have to try it with Windows NT's NFS support, I would not expect good
results (it is not even very good at ordinary tasks).
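
If both ends have LFS-aware tools, split(1) and cat(1) will do the
chunking for you; failing that, a small chunker is easy to write.  A
rough sketch (the chunk size and naming scheme are arbitrary):

    /* chunk.c -- split stdin into 1GB pieces named <prefix>.000, .001, ...
       Reassemble at the far end with:  cat prefix.* > bigfile */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>

    #define CHUNK (1024LL * 1024 * 1024)   /* 1GB, safely under the limit */

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s prefix < bigfile\n", argv[0]); return 1; }

        char buf[64 * 1024], name[256];
        FILE *out = NULL;
        long long written = CHUNK;   /* forces opening the first chunk */
        int part = 0;
        size_t n;

        while ((n = fread(buf, 1, sizeof buf, stdin)) > 0) {
            if (written >= CHUNK) {              /* start the next piece */
                if (out) fclose(out);
                snprintf(name, sizeof name, "%s.%03d", argv[1], part++);
                if (!(out = fopen(name, "wb"))) { perror(name); return 1; }
                written = 0;
            }
            fwrite(buf, 1, n, out);
            written += n;
        }
        if (out) fclose(out);
        return 0;
    }

Note that the shell doing the "< bigfile" redirect has to be able to
open the file in the first place, so run the splitter on the
large-file-capable box.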

You might be able to get the file across using HTTP; I do not believe
the protocol itself imposes a size limit.  Other options, if you get
desperate, might include ZModem over telnet, IRC DCC, or even 20 lines
of C code to open a socket and just send the raw data without any
regard to file size.
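
That last option really is about 20 lines.  Something like this sender
(the port number is an arbitrary pick; pair it with
"nc -l -p 9000 > bigfile" or an equally tiny listener on the far side):

    /* send.c -- stream stdin to <ip>:9000; the "protocol" is just
       "send bytes until EOF", so there is nothing to overflow at 2GB */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s ip < file\n", argv[0]); return 1; }

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);
        addr.sin_addr.s_addr = inet_addr(argv[1]);

        if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect"); return 1;
        }

        char buf[64 * 1024];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0)   /* stdin -> socket */
            write(s, buf, n);
        close(s);
        return 0;
    }

As with the chunker, the shell still has to open the source file, so
the sending side needs large file support; the network does not care.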

The problem, though, is with the file transfer protocol, not with Linux. :}


