
Re: shared memory in computational chemistry



Hi Francesco,

Francesco Pietra wrote:
> Hi Peter:
>
> Thanks. I set
>
>   echo "60000000" > /proc/sys/kernel/shmmax
>
> because the segment to allocate was ca. 39 MB. In between the rows
> reporting the steady approach to the SCF energy, a row appeared:
>
>   1 warning: armci set_mem_offset: offset changed 0 to 8192
I am not sure whether this means trouble, but as it is only a warning, and as you said it "completed successfully", I assume the results also made sense ;)


> and the calculation of partial charges completed successfully.
>
> As to a permanent solution, is there any drawback in setting shmmax as
> above (or even to a higher value) permanently? Probably this issue of
> allocating big fragments will become routine, though I can't predict
> their size.

I am sorry, but I don't really know about the internals of shared memory on clusters.

I do know that it is necessary to increase shmmax considerably to run Oracle (if a large block cache is needed for speed).
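As to making the change permanent: on most Linux systems the value can also be set at boot time through /etc/sysctl.conf. A minimal sketch, assuming your distribution reads that file at startup (the 67108864 bytes, i.e. 64 MB, is only an example value; pick whatever headroom your jobs need):

  # /etc/sysctl.conf -- maximum shared memory segment size in bytes
  kernel.shmmax = 67108864

You can apply the file without rebooting by running "sysctl -p".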

I also know that if a shared memory segment is not explicitly freed by the process that uses it, it will not be released automatically; the user has to remove it with "ipcrm" or similar commands to make the memory available to other applications (or to restart the same application).
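For example, to look for and clean up stale segments left behind by a crashed job (the <shmid> placeholder is whatever id ipcs reports; segments with no attached processes are the suspects):

  ipcs -m            # list shared memory segments; NATTCH = number of attached processes
  ipcrm -m <shmid>   # remove a stale segment by its id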

As long as your program frees the memory, and you don't need the rest of the memory while it is running, you should be fine, in my experience (which is from non-cluster environments).

Best Regards,

Peter


