Reputation: 28500
I use MPI_TYPE_CREATE_SUBARRAY to create a type used to communicate portions of 3D arrays between neighboring processes in a Cartesian topology. Specifically, each process communicates with the two neighboring processes on either side along each of the three directions.
Referring for simplicity to a one-dimensional grid, there are two parameters, nL and nR, that define how many values each process has to receive from the left and send to the right, and how many it has to receive from the right and send to the left.
Unaware (or maybe just forgetful) of the fact that all elements of the array_of_subsizes argument of MPI_TYPE_CREATE_SUBARRAY must be positive, I wrote code that can't deal with the case nR = 0 (or nL = 0; either can be zero).
(By the way, I see that MPI_TYPE_VECTOR does accept zero count and blocklength arguments, and it's a pity that MPI_TYPE_CREATE_SUBARRAY doesn't.)
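For instance (empty_t is just a made-up name), this call is perfectly legal and simply builds a type describing no data:
INTEGER :: empty_t, ierr
! count = 0 and blocklength = 0 are allowed: the resulting type carries no data
CALL MPI_TYPE_VECTOR(0, 0, 1, MPI_DOUBLE_PRECISION, empty_t, ierr)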
How would you suggest dealing with this problem? Do I really have to convert each call to MPI_TYPE_CREATE_SUBARRAY into a chain of MPI_TYPE_VECTOR calls?
The following code is minimal but, as posted, not complete (it works in the larger program, and I haven't had time to extract the minimum set of declarations and prints); still, it should give a better idea of what I'm talking about.
INTEGER, PARAMETER :: ndims = 3                  ! must be a named constant to size the arrays below
INTEGER :: DBS, ierr, temp, sub3D
INTEGER, DIMENSION(ndims) :: aos  = [12, 10, 8]  ! full local array sizes (example values)
INTEGER, DIMENSION(ndims) :: aoss = [12, 10, 2]  ! subarray sizes (example values; any of them may end up 0)
CALL MPI_TYPE_SIZE(MPI_DOUBLE_PRECISION, DBS, ierr)
! doesn't work if ANY(aoss == 0): every element of array_of_subsizes must be >= 1
CALL MPI_TYPE_CREATE_SUBARRAY(ndims, aos, aoss, [0,0,0], MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, sub3D, ierr)
! does work even if ANY(aoss == 0): a zero count/blocklength just gives an empty type
CALL MPI_TYPE_HVECTOR(aoss(2), aoss(1), DBS*aos(1), MPI_DOUBLE_PRECISION, temp, ierr)
CALL MPI_TYPE_HVECTOR(aoss(3), 1, DBS*PRODUCT(aos(1:2)), temp, sub3D, ierr)
In the end it wasn't hard to replace MPI_TYPE_CREATE_SUBARRAY with two chained MPI_TYPE_HVECTOR calls. Maybe this is the best solution after all.
This raises a natural side question for me: why is MPI_TYPE_CREATE_SUBARRAY so limited? There are plenty of cases in the MPI standard which correctly fall back to "do nothing" (when a sender or receiver is MPI_PROC_NULL, as in the sketch below) or "there's nothing in here" (which is what a zero element of aoss would mean in my example). Should I post a feature request somewhere?
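Here is the kind of MPI_PROC_NULL fallback I have in mind (a sketch with made-up names; comm_cart would be the Cartesian communicator created elsewhere with MPI_CART_CREATE): on a non-periodic grid the outermost processes get MPI_PROC_NULL as a neighbour, and the corresponding send/receive quietly does nothing.
INTEGER :: comm_cart      ! Cartesian communicator, created elsewhere
INTEGER :: left, right, ierr
DOUBLE PRECISION :: sbuf(10), rbuf(10)
! for a non-periodic direction, MPI_CART_SHIFT returns MPI_PROC_NULL at the boundaries
CALL MPI_CART_SHIFT(comm_cart, 0, 1, left, right, ierr)
! if right (or left) is MPI_PROC_NULL, the corresponding send (or receive) is a no-op
CALL MPI_SENDRECV(sbuf, 10, MPI_DOUBLE_PRECISION, right, 0, &
                  rbuf, 10, MPI_DOUBLE_PRECISION, left,  0, &
                  comm_cart, MPI_STATUS_IGNORE, ierr)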
Upvotes: 0
Views: 174
Reputation: 8395
The MPI 3.1 standard (chapter 4.1, page 95) makes it crystal clear:
For any dimension i, it is erroneous to specify array_of_subsizes[i] < 1 [...].
You are free to send your comment to the appropriate Mailing List.
Upvotes: 1