
shared memory in computational chemistry

My quantum mechanical computational software (running
on amd64 etch with 3700 MB per node, 16 GB total RAM)
implicitly uses shared memory segments to speed up
data transfer outside the kernel. It is unable to
allocate a 38731776-byte segment, and the computation dies.

In fact, the command "ipcs -l" returns:
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (pages) = 2097152
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 128
max semaphores per array = 250
max semaphores system wide = 32000
max ops per semop call = 32
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384

1) How do I set shmmax on Debian?

2) What is the upper limit for the hardware indicated
above, and should there be a relationship to the
hardware? (In fact, I cannot predict the size of the
segments that will be requested in the future by the
MD simulations I am carrying out.)

3) What else, if anything, should be set besides shmmax?

Thanks for any help. I could not get help from the
software people, probably because they are used to big
installations (supercomputer centers) and are unfamiliar
with the small setup I am describing here.

Francesco Pietra

