
Allow overlapping regions in MPI_Scatter[v] #840

Open
devreal opened this issue Mar 11, 2024 · 5 comments
Labels: mpi-6 (For inclusion in the MPI 5.1 or 6.0 standard), wg-collectives (Collectives Working Group)

Comments

devreal commented Mar 11, 2024

Problem

Section 6.6 contains the following constraint for MPI_Scatter and MPI_Scatterv:

The specification of counts, types, and displacements should not cause any location on the root to be read more than once.

The accompanying rationale says:

Rationale. Though not needed, the last restriction is imposed so as to achieve symmetry with MPI_GATHER, where the corresponding restriction (a multiple-write restriction) is necessary. (End of rationale.)

Someone working on collective components in Open MPI reported that at least one widely used application out there provides overlapping segments to MPI_Scatterv and gets away with it.
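
For illustration, here is a minimal sketch (my own construction, not taken from the reported application) of such a call: each rank's source region on the root overlaps its neighbor's by half, so some root locations are read twice. Under the quoted sentence this is currently erroneous, though implementations reportedly handle it fine.

```c
/* Sketch: MPI_Scatterv where consecutive ranks' source regions on the
 * root overlap by N/2 elements, i.e., some root locations are read twice. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4  /* elements per rank (illustrative) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = NULL, *counts = NULL, *displs = NULL;
    if (rank == 0) {
        /* The root buffer only needs to reach the end of the last region. */
        int total = (size - 1) * (N / 2) + N;
        sendbuf = malloc(total * sizeof(int));
        for (int i = 0; i < total; i++) sendbuf[i] = i;

        counts = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
        for (int r = 0; r < size; r++) {
            counts[r] = N;
            displs[r] = r * (N / 2);  /* regions overlap by N/2 elements */
        }
    }

    int recvbuf[N];
    /* Erroneous under the current Section 6.6 wording: overlapping
     * displacements cause root locations to be read more than once. */
    MPI_Scatterv(sendbuf, counts, displs, MPI_INT,
                 recvbuf, N, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d received %d..%d\n", rank, recvbuf[0], recvbuf[N - 1]);

    if (rank == 0) { free(sendbuf); free(counts); free(displs); }
    MPI_Finalize();
    return 0;
}
```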

Proposal

Remove the sentence and the rationale for it. There is no reason for the restriction, and the symmetry with MPI_Gather serves no purpose.

Changes to the Text

TBD

Impact on Implementations

Given that applications have been violating this constraint and getting away with it, I don't expect that any implementation exploits it, so there should be no impact on implementations.

Impact on Users

Freedom to scatter overlapping regions!

References and Pull Requests

TBD

devreal added the wg-collectives (Collectives Working Group) and mpi-5.0 (For inclusion in the MPI 5.0 standard) labels on Mar 11, 2024
github-project-automation bot moved this to To Do in MPI Next on Mar 11, 2024
jeffhammond (Member) commented

Is there a restriction on the input buffer of alltoallv?

devreal (Author) commented Mar 11, 2024

I don't see such a restriction on alltoall or alltoallv. Should there be a restriction on the output buffers of these operations?

bosilca (Member) commented Mar 11, 2024

Any such restriction will be unenforceable by MPI, because detecting overlap (in the general case) is P-complete.

devreal (Author) commented Mar 11, 2024

I think the restriction on the output buffers should be along the lines of "the result of overlapping regions in the result buffer is undefined", i.e., MPI can deposit data in any order.

bosilca (Member) commented Mar 11, 2024

Here is what the datatype chapter (Section 5.1.11) says about this:

A datatype may specify overlapping entries. The use of such a datatype in any communication in association with a buffer updated by the operation is erroneous. (This is erroneous even if the actual message received is short enough not to write any entry more than once.)

And then in 6.9.1:

Overlapping datatypes are permitted in “send” buffers. Overlapping datatypes in “receive” buffers are erroneous and may give unpredictable results.

The right approach is not to overspecify, but to let the same logic apply.
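
For concreteness, here is a minimal sketch (an assumed construction, not taken from the thread) of a datatype with overlapping entries. Under the quoted rules it is permitted as a send datatype, where the overlap only causes elements to be read twice, but would be erroneous as a receive datatype:

```c
/* Sketch: a datatype whose two blocks share elements, so it has
 * overlapping entries in the sense of Section 5.1.11. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Two blocks of 4 ints at displacements 0 and 2: the blocks share
     * elements 2 and 3. */
    int blocklens[2] = {4, 4};
    int displs[2]    = {0, 2};
    MPI_Datatype overlapped;
    MPI_Type_indexed(2, blocklens, displs, MPI_INT, &overlapped);
    MPI_Type_commit(&overlapped);

    int sbuf[6] = {0, 1, 2, 3, 4, 5};
    int rbuf[8];
    /* Permitted: the overlapping type describes only the send buffer, so
     * elements 2 and 3 are merely read twice. Using "overlapped" as the
     * receive type instead would be erroneous per 5.1.11 / 6.9.1, since
     * those elements would be written more than once. */
    MPI_Sendrecv(sbuf, 1, overlapped, 0, 0,
                 rbuf, 8, MPI_INT, 0, 0,
                 MPI_COMM_SELF, MPI_STATUS_IGNORE);

    for (int i = 0; i < 8; i++) printf("%d ", rbuf[i]);  /* 0 1 2 3 2 3 4 5 */
    printf("\n");

    MPI_Type_free(&overlapped);
    MPI_Finalize();
    return 0;
}
```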

wesbland added the mpi-6 (For inclusion in the MPI 5.1 or 6.0 standard) label and removed the mpi-5.0 (For inclusion in the MPI 5.0 standard) label on Jan 9, 2025
wesbland removed this from MPI Next on Jan 9, 2025