
Re: Making a 2>GB sized file.



On Wed, Aug 27, 2003 at 04:26:56AM +0900, Kyungwon Chun wrote:
> 
> I made my new cluster using Sarge. The problem is that I can not treat
> a file bigger than 2 GB. I'm trying to make a file on NFS mounted
> directory, using mpich and hdf5. The error message is
> 
> p15_4159: p4_error: : 1
> File locking failed in ADIOI_Set_lock. If the file system is NFS, you
> need to use NFS version 3 and mount the directory with the 'noac' option
> (no attribute caching).
> 
> But I mounted the directory using NFS version 3 with no attribute
> caching. (I found that the locking function does not work properly with
> the nfs-common package in Sarge, so I used the one from Sid, compiled
> from the source package.) I also tried the suggestion in the HDF5
> installation manual, i.e. adding the following compiler options when
> building the mpich package:
> 
> -cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
> 
> 
> But it seems it is still not working. Are there any suggestions?

 Can you try on ext3 (or any filesystem that isn't NFS) to see whether the
problem is due to NFS?  I just created a >2GB file on ext3, over NFS, using
dd if=/dev/zero of=bigfile bs=1024k count=2200
I'm appending to it with cat, and it's now up to 3.4GB.  Big files don't
seem to be a problem for NFS on Linux.  I'm using Linux 2.4.22 on the client
and server, with the NFS kernel server.  Maybe your problem is that lockd
isn't running on the server, or something like that?  Anyway, I don't think
the problem is just because of large files.

-- 
#define X(x,y) x##y
Peter Cordes ;  e-mail: X(peter@cor , des.ca)

"The gods confound the man who first found out how to distinguish the hours!
 Confound him, too, who in this place set up a sundial, to cut and hack
 my day so wretchedly into small pieces!" -- Plautus, 200 BC


