Discussion:
[OMPI users] MPI_MAXLOC problems
Diego Avesani
2018-08-22 09:49:55 UTC
Dear all,

I am going to restart the discussion about MPI_MAXLOC. We had one a
couple of weeks ago with George, Ray, Nathan, Jeff S., Jeff S., Gus.

This is because I have a problem. I have two groups and two communicators.
The first one takes care of computing the maximum value and of determining
which processor it belongs to:

nPart = 100

IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN

CALL MPI_ALLREDUCE( EFFMAX, EFFMAXW, 2, MPI_2DOUBLE_PRECISION, MPI_MAXLOC,
MPI_MASTER_COMM,MPImaster%iErr )
whosend = INT(EFFMAXW(2))
gpeff = EFFMAXW(1)
CALL MPI_BCAST(whosend,1,MPI_INTEGER,whosend,MPI_MASTER_COMM,MPImaster%iErr)

ENDIF

If I perform this, the program sets one variable to zero, specifically
nPart.

If I print:

IF(MPI_COMM_NULL .NE. MPI_MASTER_COMM)THEN
WRITE(*,*) MPImaster%rank,nPart
ELSE
WRITE(*,*) MPIlocal%rank,nPart
ENDIF

I get:

1 2
1 2
3 2
3 2
2 2
2 2
1 2
1 2
3 2
3 2
2 2
2 2


1 0
1 0
0 0
0 0

This looks like a typical memory allocation problem.

What do you think?

Thanks for any kind of help.




Diego
Gilles Gouaillardet
2018-08-22 12:02:34 UTC
Diego,

Try calling allreduce with count=1
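
For instance, here is a minimal self-contained sketch of that usage (the
program and variable names are only illustrative, and it runs on
MPI_COMM_WORLD rather than your MPI_MASTER_COMM):

PROGRAM maxloc_count1
  USE mpi
  IMPLICIT NONE
  INTEGER :: rank, ierr
  DOUBLE PRECISION :: pair_in(2), pair_out(2)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  pair_in(1) = 1.5d0 * DBLE(rank)   ! the value to maximize
  pair_in(2) = DBLE(rank)           ! the "location" carried with the value

  ! count = 1: MPI_2DOUBLE_PRECISION already describes one (value, index) pair
  CALL MPI_ALLREDUCE(pair_in, pair_out, 1, MPI_2DOUBLE_PRECISION, &
                     MPI_MAXLOC, MPI_COMM_WORLD, ierr)

  IF (rank == 0) WRITE(*,*) 'max =', pair_out(1), 'on rank', INT(pair_out(2))

  CALL MPI_FINALIZE(ierr)
END PROGRAM maxloc_count1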

Cheers,

Gilles
Jeff Squyres (jsquyres) via users
2018-08-25 12:53:05 UTC
I think Gilles is right: remember that datatypes like MPI_2DOUBLE_PRECISION are actually 2 values. So if you want to send 1 pair of double precision values with MPI_2DOUBLE_PRECISION, then your count is actually 1.
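
In other words (the declarations below are an assumption about how EFFMAX
and EFFMAXW are sized, not something taken from your code):

DOUBLE PRECISION :: EFFMAX(2), EFFMAXW(2)   ! one (value, rank) pair each

! count = 1: one MPI_2DOUBLE_PRECISION pair per buffer
CALL MPI_ALLREDUCE(EFFMAX, EFFMAXW, 1, MPI_2DOUBLE_PRECISION, &
                   MPI_MAXLOC, MPI_MASTER_COMM, MPImaster%iErr)

! With count = 2, MPI reads and writes two pairs, i.e. 4 DOUBLE PRECISION
! values, so a 2-element EFFMAXW is overrun and whatever happens to sit
! next to it in memory (apparently nPart here) can get overwritten.
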
Post by Gilles Gouaillardet
Try calling allreduce with count=1
--
Jeff Squyres
***@cisco.com
Nathan Hjelm via users
2018-08-28 20:58:05 UTC
Yup. That is the case for all composed datatypes, which is what the tuple types are: predefined composed datatypes.
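
A quick way to see that, as a sketch (not code from this thread):

INTEGER :: tsize, ierr
! MPI_TYPE_SIZE reports how many bytes of data one element of the composed
! type carries: two DOUBLE PRECISION values, i.e. typically 16 bytes.
CALL MPI_TYPE_SIZE(MPI_2DOUBLE_PRECISION, tsize, ierr)
WRITE(*,*) 'MPI_2DOUBLE_PRECISION size =', tsize, ' bytes'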

-Nathan

On Aug 28, 2018, at 02:35 PM, "Jeff Squyres (jsquyres) via users" <***@lists.open-mpi.org> wrote:

I think Gilles is right: remember that datatypes like MPI_2DOUBLE_PRECISION are actually 2 values. So if you want to send 1 pair of double precision values with MPI_2DOUBLE_PRECISION, then your count is actually 1.


Diego Avesani
2018-08-31 11:52:52 UTC
Dear all,
thanks a lot.

Diego


