Discussion:
[OMPI users] How to specify the use of RDMA?
Rodrigo Escobar
2017-03-20 06:45:28 UTC
Hi,
I have been trying to run the Intel IMB benchmarks to compare the performance of
Infiniband (IB) vs Ethernet. However, I am not seeing any difference in
performance even for communication-intensive benchmarks, such as alltoallv.

Each one of my machines has one ethernet interface and an infiniband
interface. I use the following command to run the alltoallv benchmark:
mpirun --mca btl self,openib,sm -hostfile hosts_ib IMB-MPI1 alltoallv

The hosts_ib file contains the IP addresses of the infiniband interfaces,
but the performance is the same when I deactivate the IB interfaces and use
my hosts_eth file which has the IP addresses of the ethernet interfaces. Am
I missing something? What is really happening when I specify the openib btl
if I am using the ethernet network?

Thanks
Gilles Gouaillardet
2017-03-20 13:29:19 UTC
You will get similar results with hosts_ib and hosts_eth

If you want to use tcp over ethernet, you have to
mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include eth0 ...
If you want to use tcp over ib, then
mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include ib0 ...

Keep in mind that IMB calls MPI_Init_thread(MPI_THREAD_MULTIPLE).
This is not only unnecessary here, but it also has an impact on performance (with older versions, Open MPI fell back to IPoIB;
with v2.1rc the impact should be minimal).

If you simply
mpirun --mca btl tcp,self,sm ...
then Open MPI will multiplex messages on both ethernet and IPoIB
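
By the way, if you want to double check which BTL and which interface actually carry the traffic, you can raise the BTL verbosity. Just a sketch, the exact messages vary between Open MPI versions:
mpirun --mca btl tcp,self,sm --mca btl_tcp_if_include eth0 --mca btl_base_verbose 100 -hostfile hosts_eth IMB-MPI1 alltoallv
The selected components are then printed at startup.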

Cheers,

Gilles
Rodrigo Escobar
2017-03-20 17:07:39 UTC
Thanks Gilles for the quick reply. I think I am confused about what the
openib BTL specifies.
What am I doing when I run with the openib BTL but specify my eth interface
(...and deactivate my IB interfaces)?
Isn't openib only for IB interfaces?
Am I using RDMA here?

These two commands give the same performance:
mpirun --mca btl openib,self,sm -hostfile hosts_eth ... (With IB
interfaces down)
mpirun --mca btl openib,self,sm -hostfile hosts_ib0 ...

Regards,
Rodrigo

Gilles Gouaillardet
2017-03-21 00:13:16 UTC
Rodrigo,


I do not understand what you mean by "deactivate my IB interfaces".


The hostfile is only used in the wire-up phase.

(To keep things simple, mpirun does

ssh <hostname> orted

under the hood, and <hostname> comes from your hostfile.)


So, bottom line:

mpirun --mca btl openib,self,sm -hostfile hosts_eth ... (with IB
interfaces down)
mpirun --mca btl openib,self,sm -hostfile hosts_ib0 ...

are expected to have the same performance.


Since you have some Infiniband hardware, there are two options:

- you built Open MPI with MXM support; in this case you do not use
btl/openib, but pml/cm and mtl/mxm.

If you want to force btl/openib, you have to

mpirun --mca pml ob1 --mca btl openib,self,sm ...

- you did not build Open MPI with MXM support; in this case, btl/openib
is used for inter-node communications, and btl/sm is used for intra-node
communications.
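
A quick way to tell which case applies is to ask ompi_info which components were built (just a sketch, run it on one of your nodes):
ompi_info | grep -i -e mxm -e openib
If an "MCA mtl: mxm" line shows up, Open MPI was built with MXM support; otherwise you should only see the openib BTL.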


If you want the performance numbers for tcp over ethernet, your command
line is

mpirun --mca btl tcp,self,sm --mca pml ob1 --mca btl_tcp_if_include eth0
-hostfile hosts_eth ...
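
and, for comparison, a matching native Infiniband run (assuming the same hostfile names as above, and forcing pml/ob1 so btl/openib is really used) would be something like

mpirun --mca pml ob1 --mca btl openib,self,sm -hostfile hosts_ib IMB-MPI1 alltoallv

so you can compare the alltoallv numbers from the two runs directly.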


Cheers,


Gilles
Rodrigo Escobar
2017-03-21 04:41:26 UTC
Thank you Gilles, I think that has made it clear.
Regards,
Rodrigo