Discussion: [OMPI users] Groups and Communicators
Diego Avesani
2017-07-25 14:44:41 UTC
Dear All,

I am studying groups and communicators, but before going into detail I have a
question about groups.

I would like to know whether it is possible to create a group made of the
masters of the other groups and then set up intra-communication within that
new group. I have spent some time reading different tutorials and
presentations, but it is difficult, at least for me, to understand whether
this sort of MPI-within-MPI setup is possible.

In the attachment you can find a picture that summarizes what I would like
to do.

Another strategy could be to use a virtual topology.

What do you think?

I would really appreciate any kind of help, suggestions, or links where I can
study these topics.

Again, thanks

Best Regards,

Diego
George Bosilca
2017-07-25 17:26:13 UTC
Diego,

Assuming you have some common ground between the 4 initial groups (otherwise
you will have to connect them via MPI_Comm_connect/MPI_Comm_accept), you can
merge the 4 groups together and then use any MPI mechanism to create a partial
group of leaders (such as MPI_Comm_split).
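
For illustration, a minimal sketch of that approach could look like the
following (assuming everything runs under a single MPI_COMM_WORLD, and using
a made-up rule -- every 4th rank -- to mark the masters; the names are only
placeholders, not anything from the original code):

-----
! Minimal sketch: build a communicator that contains only the masters.
! Assumption (not from the original post): each process already knows
! whether it is the master of its sub-group; here a made-up rule marks
! every 4th rank of MPI_COMM_WORLD as a master.
PROGRAM leaders_split
  USE mpi
  IMPLICIT NONE
  INTEGER :: iErr, worldRank, color, MASTER_COMM, masterRank

  CALL MPI_INIT(iErr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, worldRank, iErr)

  ! Masters share color 0; everyone else passes MPI_UNDEFINED and gets
  ! MPI_COMM_NULL back instead of a new communicator.
  IF (MOD(worldRank, 4) .EQ. 0) THEN
     color = 0
  ELSE
     color = MPI_UNDEFINED
  ENDIF
  CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, color, worldRank, MASTER_COMM, iErr)

  IF (MASTER_COMM .NE. MPI_COMM_NULL) THEN
     CALL MPI_COMM_RANK(MASTER_COMM, masterRank, iErr)
     ! ... collectives among the masters only go here ...
     CALL MPI_COMM_FREE(MASTER_COMM, iErr)
  ENDIF

  CALL MPI_FINALIZE(iErr)
END PROGRAM leaders_split
-----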

If you spawn the groups via MPI_Comm_spawn then the answer is slightly more
complicated: you need to use MPI_Intercomm_create, with the spawner as the
bridge between the different communicators (and then MPI_Intercomm_merge to
create your intracomm). You can find a good answer to this on Stack Overflow
at
https://stackoverflow.com/questions/24806782/mpi-merge-multiple-intercoms-into-a-single-intracomm
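
A rough sketch of just the merge step on the child side might look like this
(MERGE_WITH_PARENT and mergedComm are illustrative names of mine; the
multi-group bridging with MPI_Intercomm_create is what the Stack Overflow
answer above walks through):

-----
! Child side after MPI_COMM_SPAWN: turn the intercommunicator back to
! the spawner into a single intracommunicator.
SUBROUTINE MERGE_WITH_PARENT(mergedComm)
  USE mpi
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: mergedComm
  INTEGER :: parentComm, iErr

  CALL MPI_COMM_GET_PARENT(parentComm, iErr)
  IF (parentComm .EQ. MPI_COMM_NULL) THEN
     ! Not spawned: nothing to merge, just fall back to MPI_COMM_WORLD.
     mergedComm = MPI_COMM_WORLD
  ELSE
     ! high = .TRUE. orders the spawned processes after the spawner's ranks.
     CALL MPI_INTERCOMM_MERGE(parentComm, .TRUE., mergedComm, iErr)
  ENDIF
END SUBROUTINE MERGE_WITH_PARENT
-----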

How is your MPI environment started (a single mpirun, or MPI_Comm_spawn)?

George.
Jeff Squyres (jsquyres)
2017-08-02 17:36:37 UTC
George --

Just to be clear, I was not suggesting that he split on a color of MPI_COMM_NULL. His last snippet of code was:

-----
CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr)
CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)
!
IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
   CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank, MPIlocal%iErr)
   CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU, MPIlocal%iErr)
ELSE
   MPImaster%rank = MPI_PROC_NULL
ENDIF

...

IF(MPImaster%rank.GE.0)THEN
   CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
ENDIF
-----

In particular, the last "IF(MPImaster%rank.GE.0)" -- he's checking to see if the MPImaster%rank was set to MPI_PROC_NULL. I was just suggesting that he change that to "IF(MPI_COMM_NULL .NE. MASTER_COMM)" -- i.e., he shouldn't make any assumptions about the value of MPI_PROC_NULL, etc.
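
Concretely, the suggested change amounts to something like this (only the
guard differs from his snippet; the scatter call itself is untouched):

-----
IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
   ! Only ranks that actually belong to MASTER_COMM reach the collective.
   ! (Side note: in Fortran the datatype would normally be
   ! MPI_DOUBLE_PRECISION rather than the C-side MPI_DOUBLE.)
   CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
ENDIF
-----
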
Post by George Bosilca
Diego,
Setting the color to MPI_COMM_NULL is not good, as it results in some random value (and not MPI_UNDEFINED, which does not generate a communicator). Change the color to MPI_UNDEFINED and your application should work just fine (in the sense that all processes not in the master communicator will have the master_comm variable set to MPI_COMM_NULL).
George.
Post by Diego Avesani
Dear Jeff, Dear all,
thanks, I will try immediately.
thanks again
Diego
Post by Jeff Squyres (jsquyres)
Just like in your original code snippet, you can
If (master_comm .ne. Mpi_comm_null) then
...
Sent from my phone. No type good.
Post by Diego Avesani
Dear all, Dear Jeff,
I am very sorry, but I do not know how to do this kind of comparison.
CALL MPI_GROUP_INCL(GROUP_WORLD, nPSObranch, MRANKS, MASTER_GROUP, ierr)
CALL MPI_COMM_CREATE_GROUP(MPI_COMM_WORLD, MASTER_GROUP, 0, MASTER_COMM, iErr)
!
IF(MPI_COMM_NULL .NE. MASTER_COMM)THEN
   CALL MPI_COMM_RANK(MASTER_COMM, MPImaster%rank, MPIlocal%iErr)
   CALL MPI_COMM_SIZE(MASTER_COMM, MPImaster%nCPU, MPIlocal%iErr)
ELSE
   MPImaster%rank = MPI_PROC_NULL
ENDIF
and then
IF(MPImaster%rank.GE.0)THEN
   CALL MPI_SCATTER(PP, 10, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
ENDIF
What should I compare?
Thanks again
Diego
Post by Diego Avesani
CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
I get an error because some CPUs do not belong to MASTER_COMM.
IF(rank.LT.0)THEN
   CALL MPI_SCATTER(PP, npart, MPI_DOUBLE, PPL, 10, MPI_DOUBLE, 0, MASTER_COMM, iErr)
ENDIF
Post by Jeff Squyres (jsquyres)
MPI_PROC_NULL is a sentinel value; I don't think you can make any assumptions about its value (i.e., that it's negative). In practice, it probably always is, but if you want to check the rank, you should compare it to MPI_PROC_NULL.
That being said, comparing MASTER_COMM to MPI_COMM_NULL is no more expensive than comparing an integer. So that might be a bit more expressive to read / easier to maintain over time, and it won't cost you any performance.
--
Jeff Squyres
***@cisco.com
George Bosilca
2017-08-03 05:38:28 UTC
I was commenting on one of Diego's previous solutions, where all non-master
processes were passing a color of MPI_COMM_NULL to MPI_COMM_SPLIT.

Overall, comparing against MPI_COMM_NULL as you suggested is indeed the
cleanest solution.

George.