Discussion:
[OMPI users] know which CPU has the maximum value
Diego Avesani
2018-08-10 14:39:02 UTC
Permalink
Dear all,

I have a problem:
In my parallel program each CPU computes a value, let's say eff.

First of all, I would like to know the maximum value. This is quite simple
for me; I apply the following:

CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                   MPI_MASTER_COMM, MPIworld%iErr)


However, I would like also to know to which CPU that value belongs. Is it
possible?

I have set up a workaround, but it works only when all the CPUs have
different values and fails when two of them have the same eff value.

Is there any intrinsic MPI procedure?
As an alternative,
do you have any ideas?

Really, many thanks.
Diego


Reuti
2018-08-10 14:49:32 UTC
Permalink
Hi,
Would MPI_MAXLOC be sufficient?

-- Reuti
_______________________________________________
users mailing list
https://lists.open-mpi.org/mailman/listinfo/users
Ray Sheppard
2018-08-10 15:03:28 UTC
Permalink
As a dumb scientist, I would just bcast the value I get back to the
group and ask whoever owns it to kindly reply back with its rank.
     Ray
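[Editor's note: Ray's two-step idea can be sketched in Fortran as below. The variable names (eff, effmax, myRank, ownerRank, comm) are illustrative, not from Diego's code. Ties are resolved here with a second reduction that picks the lowest owning rank, which addresses the case Diego raises next.]

```fortran
! Step 1: find the global maximum (this is what Diego already does).
CALL MPI_ALLREDUCE(eff, effmax, 1, MPI_DOUBLE_PRECISION, MPI_MAX, comm, ierr)

! Step 2: every rank that owns the maximum proposes its own rank; all
! others propose a value larger than any rank. A MIN reduction then
! yields the lowest owning rank, deterministically breaking ties.
IF (eff == effmax) THEN
   myCandidate = myRank
ELSE
   myCandidate = HUGE(0)
END IF
CALL MPI_ALLREDUCE(myCandidate, ownerRank, 1, MPI_INTEGER, MPI_MIN, comm, ierr)
```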
Diego Avesani
2018-08-10 15:19:36 UTC
Permalink
Dear all,
I do not understand how MPI_MINLOC works. It seems to locate the maximum
within a vector, not the CPU to which the value belongs.

@Ray: and what if two CPUs have the same value?

thanks


Diego
Diego Avesani
2018-08-10 15:24:37 UTC
Permalink
Dear all,
I think I have understood.
The trick is to use a two-element real array and to store the rank in it as well.

Have I understood correctly?
thanks

Diego
George Bosilca
2018-08-10 15:36:46 UTC
Permalink
You will need to create a special variable that holds two entries: one for
the max operation (with whatever type you need) and an int for the rank of
the process. MPI_MAXLOC is described on the Open MPI man page [1], and you
can find an example of how to use it on the MPI Forum site [2].

George.


[1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
[2] https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
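[Editor's note: a minimal Fortran sketch of the approach George describes, following the MPI Forum example in [2]. Variable names (eff, effmax, comm) are illustrative; note that with the MPI_2DOUBLE_PRECISION pair type the rank is carried as a double and converted back to an integer.]

```fortran
DOUBLE PRECISION :: in(2), out(2)

CALL MPI_COMM_RANK(comm, myRank, ierr)

in(1) = eff              ! the value to be maximized
in(2) = DBLE(myRank)     ! the owner's rank, stored as a double

! MPI_MAXLOC compares the first entry of each pair and carries the
! second along; on ties, the smaller "location" (lower rank) wins.
CALL MPI_ALLREDUCE(in, out, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
                   comm, ierr)

effmax    = out(1)          ! the global maximum
ownerRank = INT(out(2))     ! the rank that owns it
```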
Nathan Hjelm via users
2018-08-10 15:39:32 UTC
Permalink
The problem is that minloc and maxloc need to go away; it is better to use a custom op.
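[Editor's note: a user-defined operation equivalent to MAXLOC could be sketched as below. This is a hypothetical illustration, not Nathan's code; it reduces (value, rank) pairs of doubles, so the same MPI_2DOUBLE_PRECISION datatype can be used.]

```fortran
SUBROUTINE my_maxloc(invec, inoutvec, len, dtype)
   INTEGER :: len, dtype, i
   DOUBLE PRECISION :: invec(2*len), inoutvec(2*len)
   ! Each pair is (value, rank). len counts pairs of the registered type.
   DO i = 1, 2*len - 1, 2
      ! Keep the larger value; on ties, keep the lower rank.
      IF (invec(i) > inoutvec(i) .OR. &
          (invec(i) == inoutvec(i) .AND. invec(i+1) < inoutvec(i+1))) THEN
         inoutvec(i)   = invec(i)
         inoutvec(i+1) = invec(i+1)
      END IF
   END DO
END SUBROUTINE

! Registration and use (the operation is commutative):
!   CALL MPI_OP_CREATE(my_maxloc, .TRUE., myOp, ierr)
!   CALL MPI_ALLREDUCE(in, out, 1, MPI_2DOUBLE_PRECISION, myOp, comm, ierr)
!   CALL MPI_OP_FREE(myOp, ierr)
```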
Diego Avesani
2018-08-10 15:47:01 UTC
Permalink
Dear all,
I have just implemented MAXLOC; why should it go away?
It seems to work pretty well.

thanks

Diego
Nathan Hjelm via users
2018-08-10 17:03:52 UTC
Permalink
They do not fit with the rest of the predefined operations (which operate on a single basic type), and they can easily be implemented as user-defined operations with the same performance. Add to that the fixed number of tuple types, the fact that some of them are non-contiguous (MPI_SHORT_INT), and the terrible names. If I could kill them in MPI-4, I would.
Diego Avesani
2018-08-10 17:13:12 UTC
Permalink
I agree about the names; they are very similar to MINLOC and MAXLOC in
Fortran 90.
However, I find it difficult to define an algorithm able to do the same
things.



Diego
Jeff Squyres (jsquyres) via users
2018-08-10 17:27:04 UTC
Permalink
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.

As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4, meaning that they'll likely continue to be in MPI for at least another 10 years. :-)

(And even if they did get killed in MPI-4, implementations like Open MPI would likely keep them around for quite a while -- i.e., years.)
--
Jeff Squyres
***@cisco.com
Jeff Hammond
2018-08-10 17:52:15 UTC
Permalink
This thread is a perfect illustration of why MPI Forum participants should
not flippantly discuss feature deprecation in discussions with users. Users
who are not familiar with the MPI Forum process cannot evaluate whether such
proposals are serious or have any hope of succeeding, and may therefore be
unnecessarily worried about their code breaking in the future, when that
future is 5 to infinity years away.

If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that
discussion on https://github.com/mpi-forum/mpi-issues/issues or
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.

Jeff

--
Jeff Hammond
***@gmail.com
http://jeffhammond.github.io/
Gus Correa
2018-08-10 18:11:14 UTC
Permalink
Hmmm ... no, no, no!
Why keep it secret!?!?

Diego Avesani's questions and pushback
may have saved us users from getting a
useful feature deprecated in the name of code elegance.
Code elegance may be very cherished by developers,
but it is not necessarily helpful to users,
especially if it strips away useful functionality.

My cheap 2 cents from a user.
Gus Correa
Post by Jeff Hammond
This thread is a perfect illustration of why MPI Forum participants
should not flippantly discuss feature deprecation in discussion with
users.  Users who are not familiar with the MPI Forum process are not
able to evaluate whether such proposals are serious or have any hope of
succeeding and therefore may be unnecessarily worried about their code
breaking in the future, when that future is 5 to infinity years away.
If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that
discussion on https://github.com/mpi-forum/mpi-issues/issues or
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
Jeff
On Fri, Aug 10, 2018 at 10:27 AM, Jeff Squyres (jsquyres) via users
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in
MPI-4, meaning that they'll likely continue to be in MPI for at
least another 10 years.  :-)
(And even if they did get killed in MPI-4, implementations like Open
MPI would continue to keep them in our implementations for quite a
while -- i.e., years)
On Aug 10, 2018, at 1:13 PM, Diego Avesani
I agree about the names, it is very similar to MIN_LOC and
MAX_LOC in fortran 90.
However, I find difficult to define some algorithm able to do the
same things.
Diego
On 10 August 2018 at 19:03, Nathan Hjelm via users
They do not fit with the rest of the predefined operations (which
operate on a single basic type) and can easily be implemented as
user defined operations and get the same performance. Add to that
the fixed number of tuple types and the fact that some of them are
non-contiguous (MPI_SHORT_INT) plus the terrible names. If I could
kill them in MPI-4 I would.
On Aug 10, 2018, at 9:47 AM, Diego Avesani
Post by Diego Avesani
Dear all,
I have just implemented MAXLOC, why should they  go away?
it seems working pretty well.
thanks
Diego
On 10 August 2018 at 17:39, Nathan Hjelm via users
The problem is minloc and maxloc need to go away. better to use
a custom op.
Post by Diego Avesani
Post by George Bosilca
You will need to create a special variable that holds 2
entries, one for the max operation (with whatever type you need) and
an int for the rank of the process. The MAXLOC is described on the
OMPI man page [1] and you can find an example on how to use it on
the MPI Forum [2].
Post by Diego Avesani
Post by George Bosilca
George.
[1] https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php
<https://www.open-mpi.org/doc/v2.0/man3/MPI_Reduce.3.php>
Post by Diego Avesani
Post by George Bosilca
[2]
https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html
<https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html>
Post by Diego Avesani
Post by George Bosilca
On Fri, Aug 10, 2018 at 11:25 AM Diego Avesani
  Dear all,
I have probably understood.
The trick is to use a real vector and to also store the rank.
Have I understood correctly?
thanks
Diego
On 10 August 2018 at 17:19, Diego Avesani
Dear all,
I do not understand how MPI_MINLOC works. It seems to locate the
maximum in a vector and not the CPU to which the value belongs.
Post by Diego Avesani
Post by George Bosilca
@ray: and what if two CPUs have the same value?
thanks
Diego
As a dumb scientist, I would just bcast the value I get back to
the group and ask whoever owns it to kindly reply back with its rank.
Post by Diego Avesani
Post by George Bosilca
      Ray
Hi,
On 10.08.2018 at 16:39, Diego Avesani wrote:
Dear all,
In my parallel program each CPU computes a value, let's say eff.
First of all, I would like to know the maximum value. This for
me is quite simple,
Post by Diego Avesani
Post by George Bosilca
CALL MPI_ALLREDUCE(eff, effmaxWorld, 1, MPI_DOUBLE_PRECISION,
MPI_MAX, MPI_MASTER_COMM, MPIworld%iErr)
Post by Diego Avesani
Post by George Bosilca
Would MPI_MAXLOC be sufficient?
-- Reuti
However, I would also like to know which CPU that value
belongs to. Is that possible?
Post by Diego Avesani
Post by George Bosilca
I have set up a strange procedure, but it works only when all
the CPUs have different values and fails when two of them have the
same eff value.
Post by Diego Avesani
Post by George Bosilca
Is there any intrinsic MPI procedure?
As an alternative, do you have any ideas?
really, really thanks.
Diego
Diego
_______________________________________________
users mailing list
https://lists.open-mpi.org/mailman/listinfo/users
<https://lists.open-mpi.org/mailman/listinfo/users>
--
Jeff Squyres
--
Jeff Hammond
http://jeffhammond.github.io/
Jeff Squyres (jsquyres) via users
2018-08-10 18:19:21 UTC
Permalink
Jeff H. was referring to Nathan's offhand remark about his desire to kill the MPI_MINLOC / MPI_MAXLOC operations. I think Jeff H's point is that this is just Nathan's opinion -- as far as I know, there is no proposal in front of the MPI Forum to actively deprecate MPI_MINLOC or MPI_MAXLOC. Speaking this opinion on a public mailing list with no other context created a bit of confusion.

The Forum is quite transparent in what it does -- e.g., anyone is allowed to come to its meetings and hear (and participate in!) all the deliberations, etc. But speaking off-the-cuff about something that *might* happen *someday* that would have impact on real users and real codes -- that might have caused a little needless confusion.
Post by Gus Correa
Hmmm ... no, no, no!
Keep it secret why!?!?
Diego Avesani's questions and questioning
may have saved us users from getting a
useful feature deprecated in the name of code elegance.
Code elegance may be very cherished by developers,
but it is not necessarily helpful to users,
especially if it strips off useful functionality.
My cheap 2 cents from a user.
Gus Correa
This thread is a perfect illustration of why MPI Forum participants should not flippantly discuss feature deprecation in discussion with users. Users who are not familiar with the MPI Forum process are not able to evaluate whether such proposals are serious or have any hope of succeeding and therefore may be unnecessarily worried about their code breaking in the future, when that future is 5 to infinity years away.
If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that discussion on https://github.com/mpi-forum/mpi-issues/issues or https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
Jeff
--
Jeff Squyres
***@cisco.com
Gus Correa
2018-08-10 18:42:58 UTC
Permalink
Hi Jeff S.

OK, then I misunderstood Jeff H.
Sorry about that, Jeff H..

Nevertheless, Diego Avesani certainly has a point.
And it is the point of view of a user,
something that hopefully matters.
I'd add to Diego's arguments
that maxloc, minloc, and friends are part of
Fortran, Matlab, etc.
A science/engineering programmer expects them to be available,
not to have to be reinvented from scratch,
both in the baseline language and in MPI.

In addition,
MPI developers cannot expect the typical MPI user to keep track
of what goes on in the MPI Forum.
I certainly don't have either the skill or the time for it.
However, developers can make an effort to listen to the chatter on the
various MPI users' lists before making any decision to strip off
functionality, especially one as basic as minloc and maxloc.

My two cents from a pedestrian MPI user,
who thinks minloc and maxloc are great,
knows nothing about the MPI Forum protocols and activities,
but hopes the Forum pays attention to users' needs.

Gus Correa

PS - Jeff S.: Please, bring Diego's request to the Forum! Add my vote
too. :)
Post by Jeff Squyres (jsquyres) via users
Jeff H. was referring to Nathan's offhand remark about his desire to kill the MPI_MINLOC / MPI_MAXLOC operations. I think Jeff H's point is that this is just Nathan's opinion -- as far as I know, there is no proposal in front of the MPI Forum to actively deprecate MPI_MINLOC or MPI_MAXLOC. Speaking this opinion on a public mailing list with no other context created a bit of confusion.
The Forum is quite transparent in what it does -- e.g., anyone is allowed to come to its meetings and hear (and participate in!) all the deliberations, etc. But speaking off-the-cuff about something that *might* happen *someday* that would have impact on real users and real codes -- that might have caused a little needless confusion.
Jeff Squyres (jsquyres) via users
2018-08-10 18:47:15 UTC
Permalink
I think that your reasons are very valid, and probably why the Forum a) invented MPI_MINLOC/MAXLOC in the first place, and b) why no one has put forth a proposal to get rid of them.

:-)
--
Jeff Squyres
***@cisco.com
Jeff Hammond
2018-08-12 00:02:37 UTC
Permalink
The MPI Forum email lists and GitHub are not secret. Please feel free to
follow the GitHub project linked below and/or sign up for the MPI Forum
email lists if you are interested in the evolution of the MPI standard.

What MPI Forum members should avoid is creating FUD about MPI by
speculating about the removal of useful features. There is plenty of time
to have those debates in both public and private after formal proposals are
made.

Jeff
Post by Gus Correa
Hmmm ... no, no, no!
Keep it secret why!?!?
Diego Avesani's questions and questioning
may have saved us users from getting a
useful feature deprecated in the name of code elegance.
Code elegance may be very cherished by developers,
but it is not necessarily helpful to users,
specially if it strips off useful functionality.
My cheap 2 cents from a user.
Gus Correa
Post by Jeff Hammond
This thread is a perfect illustration of why MPI Forum participants
should not flippantly discuss feature deprecation in discussion with
users. Users who are not familiar with the MPI Forum process are not able
to evaluate whether such proposals are serious or have any hope of
succeeding and therefore may be unnecessarily worried about their code
breaking in the future, when that future is 5 to infinity years away.
If someone wants to deprecate MPI_{MIN,MAX}LOC, they should start that
discussion on https://github.com/mpi-forum/mpi-issues/issues or
https://lists.mpi-forum.org/mailman/listinfo/mpiwg-coll.
Jeff
--
Jeff Hammond
***@gmail.com
http://jeffhammond.github.io/
Diego Avesani
2018-08-12 18:21:02 UTC
Permalink
Dear all,
Thanks for the discussion,
it was amazing and, as always, I have learned a lot.

Diego
Post by Jeff Hammond
The MPI Forum email lists and GitHub are not secret. Please feel free to
follow the GitHub project linked below and/or sign up for the MPI Forum
email lists if you are interested in the evolution of the MPI standard.
What MPI Forum members should avoid is creating FUD about MPI by
speculating about the removal of useful features. There is plenty of time
to have those debates in both public and private after formal proposals are
made.
Jeff
Post by Gus Correa
Hmmm ... no, no, no!
Keep it secret why!?!?
Diego Avesani's questions and questioning
may have saved us users from getting a
useful feature deprecated in the name of code elegance.
Code elegance may be very cherished by developers,
but it is not necessarily helpful to users,
specially if it strips off useful functionality.
My cheap 2 cents from a user.
Gus Correa
Gus Correa
2018-08-10 18:15:28 UTC
Permalink
Post by Jeff Squyres (jsquyres) via users
It is unlikely that MPI_MINLOC and MPI_MAXLOC will go away any time soon.
As far as I know, Nathan hasn't advanced a proposal to kill them in MPI-4, meaning that they'll likely continue to be in MPI for at least another 10 years. :-)
(And even if they did get killed in MPI-4, implementations like Open MPI would continue to keep them in our implementations for quite a while -- i.e., years)
Yeah!
Two thumbs up!
Reuti
2018-08-10 15:41:29 UTC
Permalink
Post by Diego Avesani
Dear all,
I have probably understood.
The trick is to use a real vector and to memorize also the rank.
Yes, I thought of this:

https://www.mpi-forum.org/docs/mpi-1.1/mpi-11-html/node79.html

-- Reuti
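For reference, the pattern from that MPI Forum page can be sketched in Fortran roughly as follows. This is an illustrative sketch, not code from the thread: `eff` is a stand-in for the locally computed value, and the pair layout follows the `MPI_2DOUBLE_PRECISION` convention (value first, rank second, both as doubles). Note that this also answers the tie question: among equal maxima, `MPI_MAXLOC` is defined to return the smallest rank, so the result is unique even when two processes share the same value.

```fortran
program maxloc_demo
  use mpi
  implicit none
  double precision :: eff, inpair(2), outpair(2)
  integer :: myrank, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  eff = dble(myrank)          ! stand-in for the locally computed value
  inpair(1) = eff             ! the value to maximize
  inpair(2) = dble(myrank)    ! the owning rank, carried along as a double

  call MPI_ALLREDUCE(inpair, outpair, 1, MPI_2DOUBLE_PRECISION, &
                     MPI_MAXLOC, MPI_COMM_WORLD, ierr)

  ! outpair(1) is the global maximum; int(outpair(2)) is the rank that
  ! owns it.  On ties, MPI_MAXLOC selects the smallest such rank.
  call MPI_FINALIZE(ierr)
end program maxloc_demo
```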
Diego Avesani
2018-08-10 16:46:23 UTC
Permalink
Dear all,
I did it, but I am still worried about Nathan's concern.

What do you think?

thanks again

Diego
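Even if the predefined operation did go away some day, the replacement Nathan describes (a user-defined reduction over a value/rank pair) is only a few lines. A hedged sketch, reusing the same 2-element double precision layout as `MPI_2DOUBLE_PRECISION`; all names here are illustrative:

```fortran
module maxloc_op
  implicit none
contains
  ! User-defined reduction: keep the larger value; on ties keep the
  ! smaller rank, matching MPI_MAXLOC's documented tie-breaking rule.
  subroutine my_maxloc(invec, inoutvec, n, datatype)
    double precision :: invec(2, *), inoutvec(2, *)
    integer :: n, datatype, i
    do i = 1, n
      if (invec(1, i) > inoutvec(1, i) .or. &
          (invec(1, i) == inoutvec(1, i) .and. &
           invec(2, i) < inoutvec(2, i))) then
        inoutvec(1, i) = invec(1, i)
        inoutvec(2, i) = invec(2, i)
      end if
    end do
  end subroutine my_maxloc
end module maxloc_op

program custom_op_demo
  use mpi
  use maxloc_op
  implicit none
  integer :: op, myrank, ierr
  double precision :: inpair(2), outpair(2)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  call MPI_OP_CREATE(my_maxloc, .true., op, ierr)   ! .true. = commutative
  inpair(1) = dble(myrank)   ! stand-in for the locally computed value
  inpair(2) = dble(myrank)   ! its owner
  call MPI_ALLREDUCE(inpair, outpair, 1, MPI_2DOUBLE_PRECISION, &
                     op, MPI_COMM_WORLD, ierr)
  call MPI_OP_FREE(op, ierr)
  call MPI_FINALIZE(ierr)
end program custom_op_demo
```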
Ray Sheppard
2018-08-10 16:46:55 UTC
Permalink
Hi Diego,
  If they are floats/reals, the error (overflow) bits will likely make
them unique.  If you are looking at integers, I would use isends and
just capture the first one.  You could make a little round robin and
poll everyone, saving the ranks that match, but if you are using
hundreds/thousands of ranks, that could slow everything down a little.
       Ray
George Reeke
2018-08-10 15:27:02 UTC
Permalink
Post by Ray Sheppard
As a dumb scientist, I would just bcast the value I get back to the
group and ask whoever owns it to kindly reply back with its rank.
Ray
Depends how many times one needs to run this--your solution involves
quite a bit of extra communication. It should be easy to write a
little function that does a binary reduction (loop over bits in the
rank numbers) where each node sends its value and rank to its neighbor,
each recipient picks the larger and in the next round of the reduction
(the nodes with ones in the last dimension drop out) sends that value
and rank to the neighbor in the next dimension, until the value and rank
of the highest value all end up at rank 0 (or wherever you program it).
George Reeke
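The bit-loop reduction described above can be sketched as a binomial tree; this is an illustrative sketch only (names like `pair` and `mask` are not from the thread). At each round, ranks with a one in the current bit send their (value, rank) pair to the partner below them and drop out, so the winning pair arrives at rank 0 after log2(nprocs) rounds:

```fortran
program tree_maxloc
  use mpi
  implicit none
  integer :: myrank, nprocs, ierr, mask
  double precision :: pair(2), rpair(2)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

  pair(1) = dble(myrank)   ! stand-in for the locally computed value
  pair(2) = dble(myrank)   ! its owner

  mask = 1
  do while (mask < nprocs)
    if (iand(myrank, mask) /= 0) then
      ! Ranks with a one in this bit pass their pair down and drop out.
      call MPI_SEND(pair, 2, MPI_DOUBLE_PRECISION, myrank - mask, 0, &
                    MPI_COMM_WORLD, ierr)
      exit
    else if (myrank + mask < nprocs) then
      call MPI_RECV(rpair, 2, MPI_DOUBLE_PRECISION, myrank + mask, 0, &
                    MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
      if (rpair(1) > pair(1)) pair = rpair   ! keep the larger value
    end if
    mask = mask * 2
  end do
  ! After the loop, rank 0 holds the global maximum and its owner's rank.
  call MPI_FINALIZE(ierr)
end program tree_maxloc
```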