Discussion:
[OMPI users] MPI cartesian grid : cumulate a scalar value through the procs of a given axis of the grid
Pierre Gubernatis
2018-05-02 09:15:09 UTC
Permalink
Hello all...

I am using a *Cartesian grid* of processors which represents a spatial
domain (a cubic geometrical domain split into several smaller cubes), and
I have communicators to address the procs, for example a communicator
along each of the 3 axes I, J, K, or along each plane IK, JK, IJ, etc.

*I need to cumulate a scalar value (SCAL) through the procs which belong to
a given axis* (let's say the K axis, defined by I=J=0).

Precisely, the origin proc 0-0-0 has a given value for SCAL (say SCAL000).
I need to update the 'following' proc (0-0-1) by doing SCAL = SCAL +
SCAL000, and I need to *propagate* this update along the K axis. At the
end, the last proc of the axis should have the total sum of SCAL over the
axis (and of course, at a given rank k along the axis, the SCAL value is
the sum of SCAL over ranks 0, 1, ..., k).

Please, do you see a way to do this? I have tried many things (with
MPI_SENDRECV and by looping over the procs of the axis), but I get
deadlocks that prove I don't handle this correctly...
Thank you in any case.
Peter Kjellström
2018-05-02 11:56:40 UTC
Permalink
Why did you try SENDRECV? As far as I understand your description above,
data only flows in one direction (along K)?

There is no MPI collective to support the kind of reduction you
describe but it should not be hard to do using normal SEND and RECV.
Something like (simplified pseudo code):

if (not_first_along_K)
    MPI_RECV(SCAL_tmp, previous)
    SCAL += SCAL_tmp

if (not_last_along_K)
    MPI_SEND(SCAL, next)
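
In C this could look roughly as follows (just a sketch: it assumes a
communicator k_comm spanning the K axis already exists, e.g. from
MPI_Cart_sub, and all names are illustrative):

#include <mpi.h>

/* Relay a running sum of SCAL along the ranks of k_comm.
   On return, rank r holds the sum of SCAL over ranks 0..r of the axis. */
double running_sum_along_k(MPI_Comm k_comm, double SCAL)
{
    int rank, size;
    MPI_Comm_rank(k_comm, &rank);
    MPI_Comm_size(k_comm, &size);

    if (rank > 0) {            /* not first along K: wait for the partial sum */
        double SCAL_tmp;
        MPI_Recv(&SCAL_tmp, 1, MPI_DOUBLE, rank - 1, 0, k_comm,
                 MPI_STATUS_IGNORE);
        SCAL += SCAL_tmp;
    }
    if (rank < size - 1) {     /* not last along K: pass the sum on */
        MPI_Send(&SCAL, 1, MPI_DOUBLE, rank + 1, 0, k_comm);
    }
    return SCAL;
}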

/Peter K
John Hearns via users
2018-05-02 12:08:06 UTC
Permalink
Pierre, how large are your models, i.e. how many cells in each direction?
Something inside of me is shouting that if the models are small enough
then MPI is not the way to go here.
Assuming use of a Xeon processor, there should be some AVX instructions
which can do this.

This is rather out of date, but is it helpful?
https://www.quora.com/Is-there-an-SIMD-architecture-that-supports-horizontal-cumulative-sum-Prefix-sum-as-a-single-instruction

https://software.intel.com/sites/landingpage/IntrinsicsGuide/
John Hearns via users
2018-05-02 12:11:33 UTC
Permalink
Also my inner voice is shouting that there must be an easy way to express
this in Julia
https://discourse.julialang.org/t/apply-reduction-along-specific-axes/3301/16

OK, these are not the same stepwise cumulative operations that you want,
but the idea is close.


ps. Note to self - stop listening to the voices.
John Hearns via users
2018-05-02 12:19:10 UTC
Permalink
Pierre, I may not be able to help you directly. But I had better stop
listening to the voices.
Mail me off list please.

This might do the trick using Julia
http://juliadb.org/latest/api/aggregation.html
Nathan Hjelm
2018-05-02 12:29:42 UTC
Permalink
MPI_Reduce would do this. I would use MPI_Comm_split to make an axis comm, then use reduce with the root being the last rank in the axis comm.
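
A minimal sketch of that suggestion, assuming the 3-D Cartesian
communicator is called cart_comm (all names here are illustrative).
Note that this delivers only the final total to the last rank along K,
not the intermediate partial sums:

#include <mpi.h>

/* Split cart_comm into one communicator per K column (fixed I,J) and
   reduce SCAL onto the last rank of that column. Returns the column
   total on that last rank, 0.0 everywhere else. */
double axis_total_on_last_rank(MPI_Comm cart_comm, double SCAL)
{
    int dims[3], periods[3], coords[3];
    MPI_Cart_get(cart_comm, 3, dims, periods, coords);

    /* Same (I,J) pair -> same color; key by the K coordinate so the
       ranks of the new communicator keep the axis order. */
    MPI_Comm k_comm;
    MPI_Comm_split(cart_comm, coords[0] * dims[1] + coords[1], coords[2],
                   &k_comm);

    int k_size;
    MPI_Comm_size(k_comm, &k_size);

    double total = 0.0;
    MPI_Reduce(&SCAL, &total, 1, MPI_DOUBLE, MPI_SUM, k_size - 1, k_comm);

    MPI_Comm_free(&k_comm);
    return total;
}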
Nathan Hjelm
2018-05-02 12:32:16 UTC
Permalink
Hit send before I finished. If each proc along the axis needs the partial sum (i.e. proc j gets the sum of SCAL[i] for i = 0 -> j) then MPI_Scan will do that.
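
With an axis communicator k_comm whose ranks are ordered by the K
coordinate, that is a single call (sketch, illustrative names):

/* Inclusive prefix sum along the axis: after the call, rank r of k_comm
   holds the sum of SCAL over ranks 0..r, so the last rank has the total. */
double partial = 0.0;
MPI_Scan(&SCAL, &partial, 1, MPI_DOUBLE, MPI_SUM, k_comm);
SCAL = partial;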
Peter Kjellström
2018-05-02 13:21:00 UTC
Permalink
I must confess that I had forgotten about MPI_Scan when I replied to
the OP. In fact, I don't think I've ever used it... :-)

/Peter K
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Nathan Hjelm
2018-05-02 13:45:03 UTC
Permalink
MPI_Scan/MPI_Exscan are easy to forget but really useful.
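
The difference for this use case, as a sketch (incl, excl and k_comm are
illustrative names):

double incl = 0.0, excl = 0.0;

/* Inclusive: rank r gets SCAL[0] + ... + SCAL[r] (what the OP asked for). */
MPI_Scan(&SCAL, &incl, 1, MPI_DOUBLE, MPI_SUM, k_comm);

/* Exclusive: rank r gets SCAL[0] + ... + SCAL[r-1]; the result is
   undefined on rank 0. */
MPI_Exscan(&SCAL, &excl, 1, MPI_DOUBLE, MPI_SUM, k_comm);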

-Nathan
Charles Antonelli
2018-05-02 12:39:30 UTC
Permalink
This seems to be crying out for MPI_Reduce.

Also in the previous solution given, I think you should do the MPI_Sends
first. Doing the MPI_Receives first forces serialization.

Regards,
Charles
Peter Kjellström
2018-05-02 13:21:22 UTC
Permalink
On Wed, 2 May 2018 08:39:30 -0400
Post by Charles Antonelli
This seems to be crying out for MPI_Reduce.
No, the described reduction cannot be implemented with MPI_Reduce (note
the need for partial sums along the axis).
Post by Charles Antonelli
Also in the previous solution given, I think you should do the
MPI_Sends first. Doing the MPI_Receives first forces serialization.
It needs that. The first thing that happens is that the first rank
skips the recv and sends its SCAL to the 2nd process, which has just
posted its recv.

Each process needs to complete its recv to know what to send (unless
you split it out into many more sends, which is possible).

Which solution is best depends on whether this part is performance
critical and on how large K is.

/Peter K
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
John Hearns via users
2018-05-02 13:33:44 UTC
Permalink
Peter is correct. We need to find out what K is.
But we may never find out: https://en.wikipedia.org/wiki/The_Trial

It would be fun if we could get some real-world dimensions here and some
real-world numbers.
Also, what range of numbers are these?
Pierre Gubernatis
2018-05-14 14:42:56 UTC
Permalink
Thank you to all of you for your answers (I was away until now).

Actually my question wasn't well posed. I stated it more clearly in this
post, with the answer:

https://stackoverflow.com/questions/50130688/mpi-cartesian-grid-cumulate-a-scalar-value-through-the-procs-of-a-given-axis-o?noredirect=1#comment87286983_50130688

Thanks again.
Nathan Hjelm
2018-05-14 18:08:25 UTC
Permalink
Still looks to me like MPI_Scan is what you want. Just need three additional communicators (one for each direction). With a recursive doubling MPI_Scan implementation it is O(log n) compared to O(n) in time.
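
A compilable sketch along those lines: it builds the grid with
MPI_Cart_create, uses MPI_Cart_sub to get an axis communicator (only the
K one of the three directions is shown), and fills SCAL with an arbitrary
per-rank value just for the demo. All names are illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Build a 3-D Cartesian grid over all ranks. */
    int dims[3] = {0, 0, 0}, periods[3] = {0, 0, 0};
    MPI_Dims_create(nprocs, 3, dims);
    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart_comm);

    int cart_rank, coords[3];
    MPI_Comm_rank(cart_comm, &cart_rank);
    MPI_Cart_coords(cart_comm, cart_rank, 3, coords);

    /* Sub-communicator that keeps only the K direction (I and J fixed). */
    int remain_k[3] = {0, 0, 1};
    MPI_Comm k_comm;
    MPI_Cart_sub(cart_comm, remain_k, &k_comm);

    /* Demo value; in the real code this is the local SCAL. */
    double SCAL = (double)(coords[2] + 1);

    /* Inclusive prefix sum along K: rank k of the axis gets the sum of
       SCAL over ranks 0..k, and the last rank gets the axis total. */
    double partial = 0.0;
    MPI_Scan(&SCAL, &partial, 1, MPI_DOUBLE, MPI_SUM, k_comm);

    printf("coords (%d,%d,%d): SCAL=%g, cumulated SCAL along K=%g\n",
           coords[0], coords[1], coords[2], SCAL, partial);

    MPI_Comm_free(&k_comm);
    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}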
Pierre Gubernatis
2018-05-17 07:36:14 UTC
Permalink
Yes, you are right... I didn't know MPI_Scan and I finally jumped into it, thanks.