Discussion:
[OMPI users] Follow-up to Open MPI SC'16 BOF
Howard Pritchard
2016-11-22 21:49:32 UTC
Permalink
Hello Folks,

This is a followup to the question posed at the SC’16 Open MPI BOF: Would
the community prefer to have a v2.2.x limited feature but backwards
compatible release sometime in 2017, or would the community prefer a v3.x
(not backwards compatible but potentially more features) sometime in late
2017 to early 2018?

BOF attendees expressed an interest in having a list of features that might
make it into v2.2.x and ones that the Open MPI developers think would be
too hard to backport from the development branch (master) to a v2.2.x
release stream.

Here are the requested lists:

Features that we anticipate we could port to a v2.2.x release

1. Improved collective performance (a new “tuned” module)
2. Enable Linux CMA shared memory support by default
3. PMIx 3.0 (if its new functionality were to be used in this release of
Open MPI)

Features that we anticipate would be too difficult to port to a v2.2.x
release

1. Revamped CUDA support
2. MPI_ALLOC_MEM integration with memkind
3. OpenMP affinity/placement integration
4. THREAD_MULTIPLE improvements to MTLs (the level of difficulty for this
one is less clear)

You can register your opinion on whether to go with a v2.2.x release next
year or to go from v2.1.x to v3.x in late 2017 or early 2018 at the link
below:

https://www.open-mpi.org/sc16/

Thanks very much,

Howard
--
Howard Pritchard

HPC-DES

Los Alamos National Laboratory
Jeff Hammond
2016-11-22 23:27:38 UTC
Permalink
1. MPI_ALLOC_MEM integration with memkind
It would make sense to prototype this as a standalone project that
integrates with any MPI library via PMPI. It's probably a day or two of
work to get that going.
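Jeff's PMPI suggestion amounts to intercepting MPI_Alloc_mem, inspecting a memory-kind hint, and routing the request to memkind instead of the library's default allocator. A minimal sketch of the dispatch logic such a shim would need is below; to keep it self-contained, plain malloc stands in for memkind_malloc, and the "high_bandwidth" hint string, kind names, and helper functions are illustrative assumptions, not an Open MPI or memkind API:

```c
#include <stdlib.h>
#include <string.h>

/* Kinds a memkind-backed MPI_Alloc_mem shim might distinguish.
 * Real code would map these to MEMKIND_DEFAULT, MEMKIND_HBW, etc. */
typedef enum { KIND_DEFAULT, KIND_HIGH_BANDWIDTH } alloc_kind_t;

/* Map the value of a hypothetical memory-kind info hint to a kind. */
static alloc_kind_t kind_from_hint(const char *hint)
{
    if (hint != NULL && strcmp(hint, "high_bandwidth") == 0)
        return KIND_HIGH_BANDWIDTH;
    return KIND_DEFAULT;
}

/* Core of the shim: allocate from the requested kind.  In a real PMPI
 * wrapper this logic sits inside an MPI_Alloc_mem override, which would
 * read the hint from the MPI_Info argument with MPI_Info_get and fall
 * through to PMPI_Alloc_mem when no hint is present.  Here malloc
 * stands in for memkind_malloc so the sketch compiles on its own. */
static void *alloc_mem_with_hint(size_t size, const char *hint)
{
    alloc_kind_t kind = kind_from_hint(hint);
    (void)kind;  /* real code: memkind_malloc(kind_to_memkind(kind), size) */
    return malloc(size);
}
```

Because the PMPI profiling interface is part of the MPI standard, such a shim can be built as a separate library and preloaded against any MPI implementation, which is what makes the standalone-prototype route cheap.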

Jeff
--
Jeff Hammond
***@gmail.com
http://jeffhammond.github.io/
Howard Pritchard
2016-11-22 23:45:14 UTC
Permalink
Hi Jeff,

I don't think the issue was the use of memkind itself, but rather the need
to refactor the way Open MPI uses info objects. I don't recall the details.

Howard
Post by Jeff Hammond
1. MPI_ALLOC_MEM integration with memkind
It would make sense to prototype this as a standalone project that is
integrated with any MPI library via PMPI. It's probably a day or two of
work to get that going.
Jeff
--
Jeff Hammond
http://jeffhammond.github.io/
_______________________________________________
users mailing list
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
Nathan Hjelm
2016-11-23 15:04:37 UTC
Permalink
Integration is already in the 2.x branch. The problem is that the way we handle the info key is a bit of a hack. We currently pull out one info key and pass it down to the mpool as a string. Ideally we want to just pass the info object so each mpool can define its own info keys. That requires the info work done by IBM, which may be difficult to port to 2.x.
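The contrast Nathan describes can be sketched abstractly: today one pre-selected key's value reaches the allocator as a bare string, whereas the desired design hands the whole key/value set down so each memory pool interprets its own keys. The snippet below is purely illustrative; the struct, the function names, and the "alloc_kind" key are stand-ins, not Open MPI's actual mpool interface or info-key names:

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for an MPI_Info object: a flat list of key/value pairs. */
typedef struct { const char *key; const char *value; } info_pair_t;
typedef struct { const info_pair_t *pairs; size_t n; } info_t;

static const char *info_get(const info_t *info, const char *key)
{
    for (size_t i = 0; i < info->n; i++)
        if (strcmp(info->pairs[i].key, key) == 0)
            return info->pairs[i].value;
    return NULL;
}

/* Current scheme (per the thread): the caller extracts ONE known key
 * up front and passes only its value down as a string, so the pool
 * cannot define or see any other keys. */
static void *pool_alloc_string_hint(size_t size, const char *hint)
{
    (void)hint;
    return malloc(size);
}

/* Desired scheme: pass the whole info object, so each pool can look
 * up whichever keys it defines for itself. */
static void *pool_alloc_info(size_t size, const info_t *info)
{
    const char *kind = info_get(info, "alloc_kind");  /* hypothetical key */
    (void)kind;
    return malloc(size);
}
```

The second signature is what makes allocator-specific hints possible without the caller hard-coding every key, which is why it depends on the broader info-object refactoring mentioned above.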

-Nathan
Post by Howard Pritchard
Hi Jeff,
I don't think it was the use of memkind itself, but a need to refactor the way Open MPI is using info objects
that was the issue. I don't recall the details.
Howard
• MPI_ALLOC_MEM integration with memkind
It would make sense to prototype this as a standalone project that is integrated with any MPI library via PMPI. It's probably a day or two of work to get that going.
Jeff
--
Jeff Hammond
http://jeffhammond.github.io/