Discussion:
[OMPI users] Jumbo frames
Alberto Ortiz
2017-05-05 14:16:20 UTC
Hi,
I have a program running with Open MPI over a network using a gigabit
switch. The switch supports jumbo frames up to 13,000 bytes, so, to test
whether communication would be faster with these frame lengths, I am
trying to use them with my program. I have set the MTU on each node to
13,000, but when I run the program it doesn't even start; it just blocks.
I have tried lengths from 1,500 up to 13,000, but it doesn't work with
any of them.

From searching, I have only found that I should set OMPI with "-mca
btl_openib_ib_mtu 13000" (or whatever length is to be used), but I can't
get it working.

What are the steps to get OMPI to use larger TCP packets? Is it
possible to reach 13,000 bytes instead of the standard 1,500?

Thank you in advance,
Alberto
r***@open-mpi.org
2017-05-05 14:19:28 UTC
If you are looking to use TCP packets, then you want to set the send/recv buffer size in the TCP btl, not the openib one, yes?

Also, what version of OMPI are you using?
_______________________________________________
users mailing list
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
Alberto Ortiz
2017-05-05 14:29:44 UTC
I am using version 1.10.6 on Arch Linux.
So the option I should pass to mpirun would be "-mca btl_tcp_mtu 13000"?
Just to be sure.
Thank you,
Alberto
George Bosilca
2017-05-05 14:41:21 UTC
"ompi_info --param btl tcp -l 9" will give you all the TCP options.
Unfortunately, OMPI does not support programatically changing the value of
the MTU.

George.

PS: We would be happy to receive contributions from the community.
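For reference, a minimal sketch of the commands George mentions: listing the TCP BTL parameters, then passing a couple of them to mpirun. The program name `./my_mpi_program` and the buffer size are placeholders, and whether these knobs help at all depends on the system; there is no `btl_tcp_mtu` parameter.

```shell
# List every parameter of the TCP BTL (level 9 shows all of them).
ompi_info --param btl tcp --level 9

# There is no MTU parameter, but the socket buffer sizes can be set as
# MCA parameters on the command line; 131072 bytes = 128 KiB here.
mpirun -np 4 \
    --mca btl tcp,self \
    --mca btl_tcp_sndbuf 131072 \
    --mca btl_tcp_rcvbuf 131072 \
    ./my_mpi_program
```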
Barrett, Brian via users
2017-05-05 18:09:49 UTC
But in many ways, it’s also not helpful to change the MTU from Open MPI. It sounds like you made a bunch of changes all at once; I’d break them down and build up. MTU is a very system-level configuration. Use a TCP transmission test (iperf, etc.) to make sure TCP connections work between the nodes. Once that’s working, you can start with Open MPI. While Open MPI doesn’t have a way to set the MTU, it can adjust how big the messages it passes to write() are, which will amount to the same thing if the system is well configured. In particular, you can start by moving around the eager frag limit, although that can have an impact on memory consumption.

But, as I said, the first thing is to get your operating system and networking gear set up properly. It sounds like you’re not quite there yet, and this list is probably not the place to get help with that.
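A rough sketch of that system-level verification, assuming a Linux node and an interface named eth0 (both the interface name and the peer hostname `othernode` are assumptions to adapt):

```shell
# Set and verify the interface MTU on each node.
ip link set dev eth0 mtu 13000
ip link show dev eth0 | grep -o 'mtu [0-9]*'

# Confirm a jumbo frame actually crosses the switch: ping with the
# don't-fragment bit set. The payload must leave room for the 20-byte
# IP header and 8-byte ICMP header, so the largest payload for a
# 13000-byte MTU is 13000 - 28 = 12972 bytes.
ping -M do -s 12972 -c 3 othernode

# Then measure raw TCP throughput between the nodes, e.g. with iperf3.
iperf3 -s              # run on the server node
iperf3 -c othernode    # run on the client node
```

If the ping fails with "message too long" or the iperf numbers don’t improve, the problem is in the OS or switch configuration, not in Open MPI.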

Brian
