
Re: Making a >2GB sized file.



Hello,

I just got your bug (http://bugs.debian.org/208431) requesting large
file support in mpich, and have two questions.  First, you mention early
in your original post that the compilation options you used to build
mpich still did not work; have you been able to make them work since?
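
For what it's worth, a quick way to check whether those flags actually
give you 64-bit file offsets, independently of mpich, is a small test
program along the lines of the sketch below (untested, just how I would
check it).  Build it with the same flags and run it in the NFS directory
in question:

/* lfs_test.c: check that 64-bit file offsets work at all.
 * Build with the same flags suggested for mpich:
 *   gcc -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 \
 *       -o lfs_test lfs_test.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    /* With _FILE_OFFSET_BITS=64, off_t is 64 bits wide. */
    off_t past_2gb = (off_t)2 * 1024 * 1024 * 1024;
    int fd = open("lfs_test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) { perror("open"); return 1; }

    /* Seek to the 2GB boundary and write there; this is exactly
     * the operation that fails without LFS support. */
    if (lseek(fd, past_2gb, SEEK_SET) == (off_t)-1) {
        perror("lseek to 2GB"); return 1;
    }
    if (write(fd, "x", 1) != 1) {
        perror("write past 2GB"); return 1;
    }
    printf("ok: wrote at offset %lld\n", (long long)past_2gb);
    close(fd);
    return 0;
}

If that works but mpich still fails, the flags themselves are fine and
the problem is elsewhere in the mpich build.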

Second, how (un)comfortable are other Debian Beowulfers with the 2.4
kernel requirement that compiling with these options would entail?  That
is, making this change would apparently force all mpich programs to run
with the 2.4 kernel; is that something people are comfortable with?

On Wed, 2003-08-27 at 07:31, Kyungwon Chun wrote:
    Peter Cordes wrote:
    
    >On Wed, Aug 27, 2003 at 04:26:56AM +0900, Kyungwon Chun wrote:
    >  
    >
    >>I made my new cluster using Sarge. The problem is that I cannot create
    >>a file bigger than 2 GB. I'm trying to create a file in an NFS-mounted
    >>directory, using mpich and hdf5. The error message is
    >>
    >>p15_4159: p4_error: : 1
    >>File locking failed in ADIOI_Set_lock. If the file system is NFS, you
    >>need to use NFS version 3 and mount the directory with the 'noac' option
    >>(no attribute caching).
    >>
    >>But I mounted the directory using NFS version 3 with no attribute
    >>caching. (I found that file locking does not work properly with the
    >>nfs-common package in Sarge, so I used the one from Sid, built from
    >>the source package.) I also tried the suggestion in the HDF5
    >>installation manual, i.e. adding the following compiler options when
    >>building the mpich package.
    >>
    >>-cflags="-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64"
    >>
    >>
    >>But it seems it is still not working. Are there any suggestions?
    >>    
    >>
    >
    > Can you try on an ext3 filesystem (or anything that isn't NFS) to see
    >whether the problem is due to NFS?  I just created a >2GB file on ext3,
    >over NFS, using
    >dd if=/dev/zero of=bigfile bs=1024k count=2200
    >I'm appending to it with cat, and it's now up to 3.4GB.  Big files don't
    >seem to be a problem for NFS on Linux.  I'm using Linux 2.4.22 on the client
    >and server, with the NFS kernel server.  Maybe your problem is that lockd
    >isn't running on the server, or something like that?  Anyway, I don't think
    >the problem is just because of large files.
    >
    >  
    >
    I did the same test on my NFS-mounted directory (dd if=/dev/zero
    of=bigfile bs=1024k count=2200). I could make a 2.2GB file this way
    without any problem, so I also think that big files are not a problem
    for NFS on Linux. But if I try to make a >2GB file with MPICH, it
    fails. I also checked that the lockd daemon is running on the server.
    The other possible cause of the problem is the HDF5 library, but HDF5
    also works fine on a local filesystem; I checked this using the test
    programs in the source package and one of my own. So now I think the
    problem originates in the MPICH package.
    
    My system information follows: [SNIP]
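
A couple of tests might help narrow this down.  As far as I can tell,
the ADIOI_Set_lock error quoted above means that a plain fcntl()
byte-range lock failed, so a standalone check along these lines (my own
guess at a minimal reproduction, not mpich's actual code), run on the
NFS mount, should show whether locking itself is broken:

/* lock_test.c: check fcntl() byte-range locking on the NFS mount.
 * This mimics what ROMIO's ADIOI_Set_lock does internally; over NFS
 * the request goes through lockd on both client and server.
 * Build: gcc -o lock_test lock_test.c
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    struct flock lk;
    int fd = open("lock_test.dat", O_RDWR | O_CREAT, 0644);

    if (fd < 0) { perror("open"); return 1; }

    lk.l_type   = F_WRLCK;   /* exclusive write lock */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;
    lk.l_len    = 100;       /* lock the first 100 bytes */

    /* F_SETLKW waits for the lock; a failure here is the same
     * failure ADIOI_Set_lock reports. */
    if (fcntl(fd, F_SETLKW, &lk) == -1) {
        perror("fcntl F_SETLKW"); return 1;
    }
    printf("byte-range lock acquired ok\n");

    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);  /* release the lock */
    close(fd);
    return 0;
}

If this fails, mpich never had a chance, and the problem is in the NFS
locking setup (lockd/statd) rather than in the mpich package.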
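
And to take HDF5 out of the picture entirely, something like the
following sketch (again untested; it assumes MPI_Offset is 64 bits in
your mpich build) writes through MPI-IO at an offset past 2GB.  If it
fails the same way, that points at mpich's ROMIO layer rather than at
HDF5:

/* mpiio_test.c: write past the 2GB boundary through MPI-IO (ROMIO),
 * with no HDF5 involved.  Build with mpicc and run with
 * mpirun -np 1 in the NFS-mounted directory.
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Status status;
    MPI_Offset past_2gb = (MPI_Offset)2 * 1024 * 1024 * 1024;
    char buf[4] = "big";
    int rc;

    MPI_Init(&argc, &argv);

    /* File operations return error codes by default (MPI_ERRORS_RETURN),
     * so we can check rc instead of aborting. */
    rc = MPI_File_open(MPI_COMM_WORLD, "mpiio_test.dat",
                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
                       MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "MPI_File_open failed\n");
        MPI_Finalize();
        return 1;
    }

    /* Write a few bytes just past 2GB; if ROMIO or the filesystem
     * lacks 64-bit offset support, this is where it should break. */
    rc = MPI_File_write_at(fh, past_2gb, buf, 4, MPI_CHAR, &status);
    if (rc != MPI_SUCCESS)
        fprintf(stderr, "MPI_File_write_at past 2GB failed\n");
    else
        printf("ok: wrote past 2GB through MPI-IO\n");

    MPI_File_close(&fh);
    MPI_Finalize();
    return rc == MPI_SUCCESS ? 0 : 1;
}
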
-- 
-Adam P.

GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Welcome to the best software in the world today cafe!
http://lyre.mit.edu/~powell/The_Best_Stuff_In_The_World_Today_Cafe.ogg


