Discussion:
[OMPI users] Reducing libmpi.so size...
Mahesh Nanavalla
2016-10-28 06:17:58 UTC
Hi all,

I am using openmpi-1.10.3.

openmpi-1.10.3 cross-compiled for ARM (on x86_64, for OpenWrt Linux) produces a libmpi.so.12.0.3 of 2.4 MB, but if I compile it on x86_64 (Linux), libmpi.so.12.0.3 is 990.2 KB.

Can anyone tell me how to reduce the size of libmpi.so.12.0.3 compiled for ARM?

Thanks&Regards,
Mahesh.N
Mahesh Nanavalla
2016-10-28 12:12:03 UTC
Hi Gilles,

Thanks for the reply.

I have configured as below for ARM:

./configure --enable-orterun-prefix-by-default
--prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi"
CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++
--host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers
--disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt
--disable-java --disable-libompitrace --disable-static

While running with mpirun, I am getting the following error:
***@OpenWrt:~# /usr/bin/mpirun --allow-run-as-root -np 1
/usr/bin/openmpiWiFiBulb
--------------------------------------------------------------------------
Sorry! You were supposed to get help about:
opal_init:startup:internal-failure
But I couldn't open the help file:
/home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt:
No such file or directory. Sorry!


Kindly guide me.

On Fri, Oct 28, 2016 at 4:36 PM, Gilles Gouaillardet wrote:
Hi,

I do not know if you can expect the same lib size on x86_64 and ARM. x86_64 uses variable-length instructions, and since ARM is RISC, I assume its instructions are fixed length, so more instructions are required to achieve the same result. Also, 2.4 MB does not seem huge to me.

Anyway, make sure you did not compile with -g, and that you use similar optimization levels on both architectures.

You also have to be consistent with respect to the --disable-dlopen option (by default it is off, so all components are in /.../lib/openmpi/mca_*.so; if you configure with --disable-dlopen, all components are slurped into lib{open-pal,open-rte,mpi}.so, and this obviously increases the lib size).

Depending on your compiler, you might be able to optimize for code size (vs. performance) with the appropriate flags.

Last but not least, strip your libs before you compare their sizes.
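For example, a sketch only of those two steps (the cross-strip tool name is assumed from the toolchain prefix above, and the exact flags depend on your compiler):

  # build for size rather than speed
  ./configure CFLAGS="-Os" CXXFLAGS="-Os" CC=arm-openwrt-linux-muslgnueabi-gcc ...

  # strip the installed library before comparing sizes
  arm-openwrt-linux-muslgnueabi-strip libmpi.so.12.0.3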
Cheers,
Gilles
Jeff Squyres (jsquyres)
2016-10-28 13:39:42 UTC
Post by Mahesh Nanavalla
i have configured as below for arm
./configure --enable-orterun-prefix-by-default --prefix="/home/nmahesh/Workspace/ARM_MPI/openmpi" CC=arm-openwrt-linux-muslgnueabi-gcc CXX=arm-openwrt-linux-muslgnueabi-g++ --host=arm-openwrt-linux-muslgnueabi --enable-script-wrapper-compilers --disable-mpi-fortran --enable-dlopen --enable-shared --disable-vt --disable-java --disable-libompitrace --disable-static
Note that there is a tradeoff here: --enable-dlopen will reduce the size of libmpi.so by splitting out all the plugins into separate DSOs (dynamic shared objects -- i.e., individual .so plugin files). But note that some of the plugins are quite small in terms of code. I mention this because when you dlopen a DSO, it is loaded in units of pages. So even if a DSO only has 1KB of code, it will use <page_size> bytes in your running process (e.g., 4KB -- or whatever the page size is on your system).

On the other hand, if you --disable-dlopen, then all of Open MPI's plugins are slurped into libmpi.so (and friends). Meaning: no DSOs, no dlopen, no page-boundary-loading behavior. This allows the compiler/linker to pack in all the plugins into memory more efficiently (because they'll be compiled as part of libmpi.so, and all the code is packed in there -- just like any other library). Your total memory usage in the process may be smaller.

Sidenote: if you run more than one MPI process per node, then libmpi.so (and friends) will be shared between processes. You're assumedly running in an embedded environment, so I don't know if this factor matters (i.e., I don't know if you'll run with ppn>1), but I thought I'd mention it anyway.

On the other hand (that's your third hand, for those at home counting...), you may not want to include *all* the plugins. I.e., there may be a bunch of plugins that you're not actually using, and therefore if they are compiled in as part of libmpi.so (and friends), they're consuming space that you don't want/need. So the dlopen mechanism might actually be better -- because Open MPI may dlopen a plugin at run time, determine that it won't be used, and then dlclose it (i.e., release the memory that would have been used for it).

On the other (fourth!) hand, you can actually tell Open MPI to *not* build specific plugins with the --enable-dso-no-build=LIST configure option. I.e., if you know exactly what plugins you want to use, you can negate the ones that you *don't* want to use on the configure line, use --disable-static and --disable-dlopen, and you'll likely use the least amount of memory. This is admittedly a bit clunky, but Open MPI's configure process was (obviously) not optimized for this use case -- it's much more optimized to the "build everything possible, and figure out which to use at run time" use case.

If you really want to hit rock bottom on MPI process size in your embedded environment, you can do some experimentation to figure out exactly which components you need. You can use repeated runs with "mpirun --mca ABC_base_verbose 100 ...", where "ABC" is each of Open MPI's framework names ("framework" = collection of plugins of the same type). This verbose output will show you exactly which components are opened, which ones are used, and which ones are discarded. You can build up a list of all the discarded components and --enable-mca-no-build them.
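For illustration, one iteration of that loop might look like this (the component names are placeholders, and ./my_mpi_app is a stand-in for your program):

  # see which btl components are opened, kept, and discarded at run time
  mpirun --mca btl_base_verbose 100 -np 2 ./my_mpi_app

  # repeat for the other frameworks (pml, coll, shmem, ...), collect the
  # discarded components, then rebuild without them, e.g.:
  ./configure --enable-mca-no-build=btl-openib,coll-ml ...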
Post by Mahesh Nanavalla
While running with mpirun, I am getting the following error:
--------------------------------------------------------------------------
opal_init:startup:internal-failure
/home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/help-opal-runtime.txt: No such file or directory. Sorry!
So this is really two errors:

1. The help message file is not being found.
2. Something is obviously going wrong during opal_init() (which is one of Open MPI's startup functions).

For #1, when I do a default build of Open MPI 1.10.3, that file *is* installed. Are you trimming the installation tree, perchance? If so, if you can put at least that one file back in its installation location (it's in the Open MPI source tarball), it might reveal more information on exactly what is failing.

Additionally, I wonder if shared memory is not getting set up right. Try running with "mpirun --mca shmem_base_verbose 100 ..." and see if it's reporting an error.
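For example (the location of the help file inside the source tarball is an assumption, and <path-to-openmpi-1.10.3-source> is a placeholder):

  # restore the missing help file under the configured prefix on the target
  mkdir -p /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi
  cp <path-to-openmpi-1.10.3-source>/opal/runtime/help-opal-runtime.txt \
     /home/nmahesh/Workspace/ARM_MPI/openmpi/share/openmpi/

  # then re-run with shmem verbosity
  mpirun --allow-run-as-root --mca shmem_base_verbose 100 -np 1 /usr/bin/openmpiWiFiBulb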
--
Jeff Squyres
***@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
Mahesh Nanavalla
2016-11-01 06:13:23 UTC
Hi Jeff Squyres,

Thank you for your reply.

My problem is that I want to reduce the library size by removing unwanted plugins.

Here libmpi.so.12.0.3 is 2.4 MB.

How can I know which plugins were included when building libmpi.so.12.0.3, and how can I remove them?

Thanks&Regards,
Mahesh N
Jeff Squyres (jsquyres)
2016-11-01 11:12:34 UTC
Run ompi_info; it will tell you all the plugins that are installed.
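For example (a sketch; the exact output format varies between Open MPI versions):

  # each installed plugin shows up as an "MCA <framework>: <component>" line
  ompi_info | grep "MCA "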
--
Jeff Squyres
***@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
George Bosilca
2016-11-01 17:57:38 UTC
Let's try to coerce OMPI into dumping all the modules that are still loaded after MPI_Init. That is still a superset of the needed modules, but at least everything unnecessary in your particular environment has been trimmed, as it would be during a normal OMPI run.

George.

PS: It's a shell script that needs ag to run. You need to provide the OMPI source directory. You will get a C file (named tmp.c) in the current directory that contains the code necessary to dump all active modules. You will have to fiddle with the compile line to get it to work, as you will need to specify both the source and build header file directories. For the sake of completeness, here is my compile line:
mpicc -o tmp -g tmp.c -I. -I../debug/opal/include -I../debug/ompi/include
-Iompi/include -Iopal/include -Iopal/mca/event/libevent2022/libevent
-Iorte/include -I../debug/opal/mca/hwloc/hwloc1113/hwloc/include
-Iopal/mca/hwloc/hwloc1113/hwloc/include -Ioshmem/include -I../debug/
-lopen-rte -l open-pal
Mahesh Nanavalla
2016-11-02 03:49:26 UTC
Hi George,

Thanks for the reply.

Using that script, how can I reduce the libmpi.so size?
Gilles Gouaillardet
2016-11-02 04:28:59 UTC
Did you strip the libraries already?

The script will show the list of frameworks and components used by an MPI hello world.

From that, you can deduce a list of components that are not required, exclude them via the configure command line, and rebuild a trimmed Open MPI.

Note this is pretty painful and incomplete. For example, the ompi/io components are not explicitly required by an MPI hello world, but they are required if your app uses MPI-IO (e.g. MPI_File_xxx). Some more components might also be dynamically required by a real-world MPI app.

May I ask why you are focusing on reducing the lib size? Reducing it by excluding (allegedly) useless components is a long and painful process, and you might end up having to debug new problems on your own.

As far as I am concerned, if a few MB of libs is too big (filesystem? memory?), I do not see how a real-world application can even run on your ARM node.


Cheers,


Gilles
George Bosilca
2016-11-02 11:48:36 UTC
Gilles is right: the script shows only what is used right after MPI_Init, and it will disregard some of the less mainstream types of modules, the ones that are dynamically loaded as needed during the execution. It also shows only what is related to libmpi, and ignores everything related to ORTE that is not in use inside the MPI library. However, it does allow you to define a list of necessary modules that you can then use during configure to limit the size of your MPI library.

1. If your goal is to limit the size of the library for a limited set of applications, you can do the following. Instead of generating an app, use the output of the script to generate a function, and link it with your application(s). Calling that function right before your MPI_Finalize will let you dump the entire list of modules used in your application(s).

2. During configure, use the option --enable-mca-no-build="list" to remove all unnecessary modules from the build process. Configure will ignore them, and therefore they will not end up in your libmpi.so (a sketch of such a configure line follows after this list).

3. Some of the frameworks are dynamically selected for each communicator or peer process (e.g. collective and BTL), so it might be difficult and error prone to trim them down further.
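As an illustration only (the framework-component pairs below are placeholders, not a measured list for this system), such a configure line could look like:

  ./configure --host=arm-openwrt-linux-muslgnueabi \
      --enable-mca-no-build=btl-openib,coll-ml,pml-cm ...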

George.
Dave Love
2016-11-07 17:02:41 UTC
Do what Debian does for armel?

du -h lib/openmpi/lib/libmpi.so.20.0.1
804K lib/openmpi/lib/libmpi.so.20.0.1

[What's OMPI useful for on an OpenWrt system?]
