
pvbatch / openmpi refuses to run under chroot



Hi,
FreeFOAM-0.1.0 was released this week and I am attempting to create
the Debian packages for it. During the build, the configuration scripts
check the ParaView release number using 'pvbatch --version'. Pvbatch
is a program from the ParaView distribution and runs in parallel. On my
Debian/unstable system this works without problems, but inside a chroot
built with pbuilder, 'pvbatch --version' fails. Its output is:
Start of output 'pvbatch --version'
===================================
[hamburg:23826] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in
file ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 161
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_plm_base_select failed
  --> Returned value Not found (-13) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
[hamburg:23826] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in
file ../../../orte/runtime/orte_init.c at line 132
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_set_name failed
  --> Returned value Not found (-13) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
[hamburg:23826] [[INVALID],INVALID] ORTE_ERROR_LOG: Not found in
file ../../../orte/orted/orted_main.c at line 325
[hamburg:23825] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a
daemon on the local node in
file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at
line 469
[hamburg:23825] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a
daemon on the local node in
file ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at
line 230
[hamburg:23825] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a
daemon on the local node in file ../../../orte/runtime/orte_init.c at
line 132
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_set_name failed
  --> Returned value Unable to start a daemon on the local node (-128)
instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or
environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  ompi_mpi_init: orte_init failed
  --> Returned "Unable to start a daemon on the local node" (-128)
instead of "Success" (0)
--------------------------------------------------------------------------
*** The MPI_Init() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[hamburg:23825] Abort before MPI_INIT completed successfully; not able
to guarantee that all other processes were killed!

End of output 'pvbatch --version'
===================================
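
For context, the check amounts to something like the following sketch
(a minimal sh fragment; the exact wording of FreeFOAM's own check and
the precise format of the pvbatch version string are assumptions on my
part, not copied from the sources):

    # Sketch of a version check via pvbatch (assumed output format:
    # a line containing something like "paraview version 3.8.1").
    PV_VERSION=$(pvbatch --version 2>&1 | \
        sed -n 's/.*version[[:space:]]*\([0-9][0-9.]*\).*/\1/p')
    if [ -z "$PV_VERSION" ]; then
        echo "Could not determine ParaView version from pvbatch" >&2
        exit 1
    fi
    echo "Found ParaView $PV_VERSION"

Inside the pbuilder chroot the pvbatch call itself aborts as shown
above, so any check of this form fails.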

It seems to be a problem in openmpi: probably a missing package or some
missing configuration in the chroot build environment. The build
dependencies are:
Build-Depends: cdbs, debhelper (>= 5.0.24), python-support, cmake, flex,
gawk, python, libreadline6-dev, zlib1g-dev, libscotch-dev,
libparmetis-dev, mpi-default-dev, mpi-default-bin, paraview (>= 3.8),
doxygen, asciidoc, xmlto, docbook-utils, dvipng, asymptote,
texlive-science, dblatex

Does anybody have an idea how to solve this problem (apart from
disabling the check on the ParaView release number)?
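
One alternative to disabling the check entirely might be to read the
version from ParaView's installed headers rather than running pvbatch,
so that no MPI daemon needs to be started. A rough sketch follows; the
header location and the PARAVIEW_VERSION_FULL macro name are
assumptions about how the Debian paraview package installs its
development files, not something I have verified:

    # Rough sketch: look up the ParaView version from vtkPVConfig.h
    # instead of running pvbatch (path and macro name are assumptions).
    PV_HEADER=$(find /usr/include -name vtkPVConfig.h 2>/dev/null | head -n 1)
    PV_VERSION=""
    if [ -n "$PV_HEADER" ]; then
        PV_VERSION=$(sed -n \
            's/.*PARAVIEW_VERSION_FULL[[:space:]]*"\([^"]*\)".*/\1/p' \
            "$PV_HEADER")
    fi
    echo "ParaView version (from header): ${PV_VERSION:-unknown}"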

Thanks,
Gerber van der Graaf
