Diego Avesani
2015-09-02 12:39:03 UTC
Dear all,
I have noticed a small difference between Open MPI and Intel MPI.
For example, in MPI_ALLREDUCE, Intel MPI does not allow the same
variable to be used as both the send and receive buffer.
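(For what it's worth, the portable way to reduce into the same variable is MPI_IN_PLACE, which both Open MPI and Intel MPI accept. A minimal sketch, with illustrative variable names:

```fortran
PROGRAM allreduce_in_place
  USE mpi
  IMPLICIT NONE
  INTEGER :: iErr, rank
  DOUBLE PRECISION :: total

  CALL MPI_INIT(iErr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, iErr)

  total = DBLE(rank + 1)
  ! Pass MPI_IN_PLACE as the send buffer so "total" serves as both input
  ! and output; aliasing sendbuf and recvbuf directly is invalid per the
  ! MPI standard, even though some implementations tolerate it.
  CALL MPI_ALLREDUCE(MPI_IN_PLACE, total, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, iErr)

  CALL MPI_FINALIZE(iErr)
END PROGRAM allreduce_in_place
```
)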
I wrote my code with Open MPI, but unfortunately I have to run it on an
Intel MPI cluster.
Now I get the following error:
Fatal error in MPI_Isend: Invalid communicator, error stack:
MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE,
dest=0, tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed
This is how I create my type:

  CALL MPI_TYPE_VECTOR(1, Ncoeff_MLS, Ncoeff_MLS, MPI_DOUBLE_PRECISION, &
                       coltype, MPIdata%iErr)
  CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
  !
  CALL MPI_TYPE_VECTOR(1, nVar, nVar, coltype, MPI_WENO_TYPE, &
                       MPIdata%iErr)
  CALL MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
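(Note: since count is 1 and blocklength equals stride, each MPI_TYPE_VECTOR call above describes a single contiguous block, so, assuming that layout is what I intend, the same types could also be built more simply with MPI_TYPE_CONTIGUOUS:

```fortran
  CALL MPI_TYPE_CONTIGUOUS(Ncoeff_MLS, MPI_DOUBLE_PRECISION, coltype, MPIdata%iErr)
  CALL MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
  CALL MPI_TYPE_CONTIGUOUS(nVar, coltype, MPI_WENO_TYPE, MPIdata%iErr)
  CALL MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
```
)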
Do you believe the problem is here?
Is this also how Intel MPI creates a datatype?
Maybe I could also ask the Intel MPI users.
What do you think?
Diego