[OMPI users] Segmentation fault with openmpi-v2.0.1-134-g52bea1d on SuSE Linux
Siegmar Gross
2016-11-02 13:42:59 UTC
Hi,

I have installed openmpi-v2.0.1-134-g52bea1d on my "SUSE Linux Enterprise
Server 12 (x86_64)" with Sun C 5.14 beta and gcc-6.2.0. Unfortunately,
I get an error when I run one of my programs.
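
For context, spawn_master is essentially a small MPI_Comm_spawn test. The
following is only a simplified sketch of the relevant part (not the exact
source); it spawns the slave binary and reports the group sizes of the
resulting intercommunicator:

/* Simplified sketch of the spawn master (not the exact source). */
#include <stdio.h>
#include "mpi.h"

#define NUM_SLAVES 4

int main (int argc, char *argv[])
{
  int  rank, world_size, local_size, remote_size, namelen;
  char hostname[MPI_MAX_PROCESSOR_NAME];
  MPI_Comm COMM_CHILD_PROCESSES;        /* intercommunicator to slaves */

  MPI_Init (&argc, &argv);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  MPI_Comm_size (MPI_COMM_WORLD, &world_size);
  MPI_Get_processor_name (hostname, &namelen);
  printf ("Parent process %d running on %s\n", rank, hostname);
  printf ("  I create %d slave processes\n", NUM_SLAVES);

  /* spawn the slave program */
  MPI_Comm_spawn ("spawn_slave", MPI_ARGV_NULL, NUM_SLAVES,
                  MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                  &COMM_CHILD_PROCESSES, MPI_ERRCODES_IGNORE);

  MPI_Comm_size (COMM_CHILD_PROCESSES, &local_size);          /* local group  */
  MPI_Comm_remote_size (COMM_CHILD_PROCESSES, &remote_size);  /* remote group */
  printf ("Parent process %d: tasks in MPI_COMM_WORLD: %d\n"
          "  tasks in COMM_CHILD_PROCESSES local group:  %d\n"
          "  tasks in COMM_CHILD_PROCESSES remote group: %d\n",
          rank, world_size, local_size, remote_size);

  MPI_Comm_free (&COMM_CHILD_PROCESSES);
  MPI_Finalize ();
  return 0;
}
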

loki spawn 149 ompi_info | grep -e "Open MPI:" -e "C compiler absolute:"
Open MPI: 2.0.2a1
C compiler absolute: /opt/solstudio12.5b/bin/cc
loki spawn 150 mpiexec -np 1 --host loki --slot-list 0:0-5,1:0-5 spawn_master

Parent process 0 running on loki
I create 4 slave processes

[loki:03941] sm_segment_attach: mca_common_sm_module_attach failure!
--------------------------------------------------------------------------
A system call failed during shared memory initialization that should
not have. It is likely that your MPI job will now either abort or
experience performance degradation.

Local host: loki
System call: open(2)
Error: No such file or directory (errno 2)
--------------------------------------------------------------------------
[loki:03941] *** Process received signal ***
[loki:03941] Signal: Segmentation fault (11)
[loki:03941] Signal code: Address not mapped (1)
[loki:03941] Failing at address: 0x8
[loki:03931] [[37095,0],0] ORTE_ERROR_LOG: Not found in file
../../openmpi-v2.0.1-134-g52bea1d/orte/orted/pmix/pmix_server_fence.c at line 186
[loki:03931] [[37095,0],0] ORTE_ERROR_LOG: Not found in file
../../openmpi-v2.0.1-134-g52bea1d/orte/orted/pmix/pmix_server_fence.c at line 186
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.

Process 1 ([[37095,2],0]) is on host: loki
Process 2 ([[37095,2],1]) is on host: unknown!
BTLs attempted: self sm tcp vader

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

ompi_dpm_dyn_init() failed
--> Returned "Unreachable" (-12) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
loki spawn 151



The program works as expected if I specify the hosts in the following way.

loki spawn 151 mpiexec -np 1 --host loki,loki,loki,nfs1,nfs1 spawn_master

Parent process 0 running on loki
I create 4 slave processes

Parent process 0: tasks in MPI_COMM_WORLD: 1
tasks in COMM_CHILD_PROCESSES local group: 1
tasks in COMM_CHILD_PROCESSES remote group: 4

Slave process 0 of 4 running on loki
Slave process 1 of 4 running on loki
spawn_slave 0: argv[0]: spawn_slave
spawn_slave 1: argv[0]: spawn_slave
Slave process 2 of 4 running on nfs1
spawn_slave 2: argv[0]: spawn_slave
Slave process 3 of 4 running on nfs1
spawn_slave 3: argv[0]: spawn_slave
loki spawn 152
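
The slave side is equally small; this is again only a simplified sketch of
what spawn_slave does (not the exact source), printing its rank, size, host
name, and argv[0]:

/* Simplified sketch of the spawn slave (not the exact source). */
#include <stdio.h>
#include "mpi.h"

int main (int argc, char *argv[])
{
  int  rank, size, namelen;
  char hostname[MPI_MAX_PROCESSOR_NAME];
  MPI_Comm comm_parent;                 /* intercommunicator to the master */

  MPI_Init (&argc, &argv);
  MPI_Comm_get_parent (&comm_parent);   /* MPI_COMM_NULL if not spawned */
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
  MPI_Comm_size (MPI_COMM_WORLD, &size);
  MPI_Get_processor_name (hostname, &namelen);
  printf ("Slave process %d of %d running on %s\n", rank, size, hostname);
  printf ("spawn_slave %d: argv[0]: %s\n", rank, argv[0]);
  MPI_Finalize ();
  return 0;
}
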



I would be grateful if somebody could fix the problem. Thank you
very much in advance for any help.


Kind regards

Siegmar
