Discussion:
[OMPI users] mpicc and libstdc++, general building question
Christof Koehler
2017-04-07 17:11:50 UTC
Permalink
Hello everybody,

we are trying to build an application with specific C++
requirements, and for some reason we end up with the "wrong" libstdc++
library. At the moment the working hypothesis is that this might be
related to our OpenMPI build. I will try to explain below.

The application, which is C++/Python, wants a compiler that
implements the C++14 standard. The system compiler on our cluster is
gcc 4.8.5 (CentOS 7), which is too old. We have a separate gcc 6.3
installation available as a module, which appears to be working
well, although I had never tested C++ explicitly until now.

After loading the gcc 6.3 module I usually build my OpenMPI simply like this

./configure FFLAGS="-O1" CFLAGS="-O1" FCFLAGS="-O1" CXXFLAGS="-O1"
--with-psm2 --with-tm --with-hwloc=internal --enable-static
--enable-orterun-prefix-by-default

assuming that configure figures out the "right" compiler, which I thought
it did based on observing

mpicc -v
Using built-in specs.
COLLECT_GCC=/cluster/comp/gcc/6.3.0/bin/gcc
COLLECT_LTO_WRAPPER=/cluster/comp/gcc/6.3.0/libexec/gcc/x86_64-pc-linux-gnu/6.3.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ../gcc-6.3.0/configure --prefix=/cluster/comp/gcc/6.3.0 --disable-multilib
Thread model: posix
gcc version 6.3.0 (GCC)

After that we build mpi4py (which as far as I can see is pure C
and not C++) with the gcc 6.3 and OpenMPI modules loaded. Checking
the dynamic libraries of mpi4py with ldd, we see
# ldd ./build/lib.linux-x86_64-2.7/mpi4py/MPI.so|grep libstdc
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002ab8b0c0b000)
which is obviously the system's (gcc 4.8.5) libstdc++, even though
# echo $LD_LIBRARY_PATH
/cluster/comp/gcc/6.3.0/lib64:/cluster/comp/gcc/6.3.0/lib:/cluster/mpi/openmpi/2.0.2/gcc62/lib:/cluster/mpi/openmpi/2.0.2/gcc63/lib

On top of that, all OpenMPI libraries, when checked with ldd (gcc 6.3 module
still loaded), reference /usr/lib64/libstdc++.so.6 and not
/cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6, which leads to the idea
that the OpenMPI installation might be the reason we have the
/usr/lib64/libstdc++.so.6 dependency in the mpi4py libraries as well.

What we would like to have is that libstdc++.so.6 resolves to the
libstdc++ provided by the gcc 6.3 compiler for the mpi4py, which would be
available in its installation directory, i.e. /cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6.

So, am I missing options in my OpenMPI build? Should I explicitly do a
./configure CC=/cluster/comp/gcc/6.3.0/bin/gcc CXX=/cluster/comp/gcc/6.3.0/bin/g++ ...
or similar? Am I building it correctly with a gcc contained in a
separate module at all? Or do we have a problem with our ld configuration?



Best Regards

Christof
Reuti
2017-04-07 20:38:00 UTC
Permalink
Christof Koehler
2017-04-08 21:36:44 UTC
Permalink
Hello Reuti,

thank you very much for your interest and the information.

I found out what is causing this. It is the --enable-static option to
configure.

With --enable-static, a "ldd *so*|grep libstdc++" in OpenMPI's lib
directory gives
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002b2b42b28000)
for all .so's in the case of 2.0.2, and a mixture of the system and the
gcc 6.3 libstdc++ in the case of 1.10.6.

Without --enable-static, a "ldd *so*|grep libstdc++" shows that for
2.0.2 libstdc++ is not referenced at all in the
OpenMPI libraries, and for 1.10.6 only the gcc 6.3 provided libstdc++ is
referenced.

Note: I apparently do not have a libmpi_cxx.so.20,
only the mpicxx wrapper, I suppose because I did not use the
--enable-mpi-cxx configure option. For 1.10.6 (where the C++ bindings are
built by default) I have a libmpi_cxx.so.1.1.3 which, with --enable-static,
references
libstdc++.so.6 => /usr/lib64/libstdc++.so.6
and without --enable-static references
libstdc++.so.6 => /cluster/comp/gcc/6.3.0/lib/../lib64/libstdc++.so.6

Testing further by building mpi4py (which is pure C, by the way), I see that
libstdc++ is again not referenced at all
if --enable-static was not used during the OpenMPI 2.0.2 configure step. If
--enable-static had been used during the OpenMPI 2.0.2 configure, the mpi4py
libraries contain references to the system's libstdc++.
Not using --enable-static during the OpenMPI 1.10.6 configure yields an
mpi4py library which only references the gcc 6.3 libstdc++ (and not a
mixture as with --enable-static).

So the initial question is explained then.

However, is this the expected behaviour with the --enable-static configure
option?

Best Regards

Christof
Hi,
Post by Christof Koehler
[…]
On top, all OpenMPI libraries when checked with ldd (gcc 6.3 module
still loaded) reference the /usr/lib64/libstdc++.so.6 and not
/cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6 which leads to the idea
that the OpenMPI installation might be the reason we have the
/usr/lib64/libstdc++.so.6 dependency in the mpi4py libraries as well.
What we would like to have is that libstdc++.so.6 resolves to the
libstdc++ provided by the gcc 6.3 compiler for the mpi4py, which would be
available in its installation directory, i.e. /cluster/comp/gcc/6.3.0/lib64/libstdc++.so.6.
So, am I missing options in my OpenMPI build? Should I explicitly do a
./configure CC=/cluster/comp/gcc/6.3.0/bin/gcc CXX=/cluster/comp/gcc/6.3.0/bin/g++ ...
or similar? Am I building it correctly with a gcc contained in a
separate module at all? Or do we have a problem with our ld configuration?
$ ldd libmpi_cxx.so.20

libstdc++.so.6 => /home/reuti/local/gcc-6.2.0/lib64/../lib64/libstdc++.so.6 (0x00007f184d2e2000)
$ readelf -a libmpi_cxx.so.20

0x000000000000000f (RPATH) Library rpath: [/home/reuti/local/openmpi-2.1.0_gcc-6.2.0_shared/lib64:/home/reuti/local/gcc-6.2.0/lib64/../lib64]
0x000000000000001d (RUNPATH) Library runpath: [/home/reuti/local/openmpi-2.1.0_gcc-6.2.0_shared/lib64:/home/reuti/local/gcc-6.2.0/lib64/../lib64]
Can you check the order in your PATH and LD_LIBRARY_PATH – are they as expected when loading the module?
-- Reuti