I am trying to use MPI_TYPE_VECTOR to exchange information between the ghost cells of a 2D computational domain. The test case is the following:
The topology is made of two blocks in the vertical direction and one block in the other direction. The objective is to fill the ghost cells of each block with the corresponding cells of the neighboring block.
If I don't use derived MPI datatypes, the communications work just fine.
PROGRAM test
USE mpi
IMPLICIT NONE
INTEGER*4, PARAMETER :: kim = 3 ! Number of cells (horizontal)
& , kjm = 2 ! Number of cells (vertical)
& , is = 1 ! Number of ghost cells
INTEGER*4, DIMENSION(1-is:kim+is,1-is:kjm+is)
& :: vect2d ! 2D vector representing the computational domain
INTEGER*4 :: i, j
! MPI Parameters
INTEGER*4, PARAMETER :: ndims = 2
INTEGER*4 :: mpicode
INTEGER*4 :: nb_procs
INTEGER*4 :: rank
INTEGER*4 :: comm
INTEGER*4 :: etiquette = 100
INTEGER*4, DIMENSION (ndims) :: dims
INTEGER*4, DIMENSION (ndims) :: coords
LOGICAL, DIMENSION (ndims) :: periods
LOGICAL :: reorganisation
INTEGER, DIMENSION(MPI_STATUS_SIZE) :: statut
INTEGER*4, DIMENSION (2*ndims) :: neighbors
INTEGER*4 :: type_column
! Initialisation MPI
CALL MPI_INIT (mpicode)
CALL MPI_COMM_SIZE (MPI_COMM_WORLD, nb_procs, mpicode)
CALL MPI_COMM_RANK (MPI_COMM_WORLD, rank, mpicode)
dims(1) = 1
dims(2) = 2
periods = .FALSE.
reorganisation = .FALSE.
CALL MPI_CART_CREATE (MPI_COMM_WORLD, ndims, dims, periods,
& reorganisation, comm, mpicode)
CALL MPI_CART_COORDS (comm, rank, ndims, coords, mpicode)
CALL MPI_CART_SHIFT (comm, 0, 1, neighbors(1), neighbors(2),
& mpicode)
CALL MPI_CART_SHIFT (comm, 1, 1, neighbors(3), neighbors(4),
& mpicode)
! Initialisation domain
DO i = 1-is, kim+is
DO j = 1-is, kjm+is
vect2d(i,j) = rank
END DO
END DO
! Communications
! Without vector type
DO i = 1, kim
CALL MPI_RECV (vect2d(i,1-is:0),
& size(vect2d(i,1-is:0)),
& MPI_INTEGER, neighbors(3), etiquette, comm,
& statut, mpicode)
CALL MPI_SEND (vect2d(i,kjm+1-is:kjm),
& size(vect2d(i,kjm+1-is:kjm)),
& MPI_INTEGER, neighbors(4), etiquette, comm,
& mpicode)
CALL MPI_RECV (vect2d(i,kjm+1:kjm+is),
& size(vect2d(i,kjm+1:kjm+is)),
& MPI_INTEGER, neighbors(4), etiquette, comm,
& statut, mpicode)
CALL MPI_SEND (vect2d(i,1:is),
& size(vect2d(i,1:is)),
& MPI_INTEGER, neighbors(3), etiquette, comm,
& mpicode)
END DO
! Write
OPEN (10,
& file = "test.dat",
& form = "formatted",
& status = "unknown")
IF (rank .EQ. 1) WRITE(10,*) vect2d
CLOSE(10)
CALL MPI_FINALIZE(mpicode)
END PROGRAM test
In this case the first line of the upper block is [1 0 0 0 1], so the ghost cells are correctly filled with the values from the lower block.
With MPI_TYPE_VECTOR
PROGRAM test
USE mpi
IMPLICIT NONE
INTEGER*4, PARAMETER :: kim = 3 ! Number of cells (horizontal)
& , kjm = 2 ! Number of cells (vertical)
& , is = 1 ! Number of ghost cells
INTEGER*4, DIMENSION(1-is:kim+is,1-is:kjm+is)
& :: vect2d ! 2D vector representing the computational domain
INTEGER*4 :: i, j
! MPI Parameters
INTEGER*4, PARAMETER :: ndims = 2
INTEGER*4 :: mpicode
INTEGER*4 :: nb_procs
INTEGER*4 :: rank
INTEGER*4 :: comm
INTEGER*4 :: etiquette = 100
INTEGER*4, DIMENSION (ndims) :: dims
INTEGER*4, DIMENSION (ndims) :: coords
LOGICAL, DIMENSION (ndims) :: periods
LOGICAL :: reorganisation
INTEGER, DIMENSION(MPI_STATUS_SIZE) :: statut
INTEGER*4, DIMENSION (2*ndims) :: neighbors
INTEGER*4 :: type_column
!------------------------------------------------------------------------------
! Initialisation MPI
CALL MPI_INIT (mpicode)
CALL MPI_COMM_SIZE (MPI_COMM_WORLD, nb_procs, mpicode)
CALL MPI_COMM_RANK (MPI_COMM_WORLD, rank, mpicode)
dims(1) = 1
dims(2) = 2
periods = .FALSE.
reorganisation = .FALSE.
CALL MPI_CART_CREATE (MPI_COMM_WORLD, ndims, dims, periods,
& reorganisation, comm, mpicode)
CALL MPI_CART_COORDS (comm, rank, ndims, coords, mpicode)
CALL MPI_CART_SHIFT (comm, 0, 1, neighbors(1), neighbors(2),
& mpicode)
CALL MPI_CART_SHIFT (comm, 1, 1, neighbors(3), neighbors(4),
& mpicode)
CALL MPI_TYPE_VECTOR(is, 1, (kim+2*is),
& MPI_INTEGER, type_column, mpicode)
CALL MPI_TYPE_COMMIT(type_column, mpicode)
!------------------------------------------------------------------------------
! Initialisation domain
DO i = 1-is, kim+is
DO j = 1-is, kjm+is
vect2d(i,j) = rank
END DO
END DO
!------------------------------------------------------------------------------
! Communications
! With vector type
DO i = 1, kim
CALL MPI_RECV (vect2d(j,1-is),
& 1,
& type_column, neighbors(3), etiquette, comm,
& statut, mpicode)
CALL MPI_SEND (vect2d(i,kjm+1-is),
& 1,
& type_column, neighbors(4), etiquette, comm,
& mpicode)
CALL MPI_RECV (vect2d(i,kjm+1),
& 1,
& type_column, neighbors(4), etiquette, comm,
& statut, mpicode)
CALL MPI_SEND (vect2d(i,1),
& 1,
& type_column, neighbors(3), etiquette, comm,
& mpicode)
END DO
!------------------------------------------------------------------------------
! Write
OPEN (10,
& file = "test.dat",
& form = "formatted",
& status = "unknown")
IF (rank .EQ. 1) WRITE(10,*) vect2d
CLOSE(10)
CALL MPI_FINALIZE(mpicode)
END PROGRAM test
In this case the output isn't the expected one, as I obtain [1 1 1 1 0]. I suppose the problem comes from the definition of my type_column, but I cannot figure it out... so any help and explanation is welcome!
Thanks
EDIT: Here is the domain associated with the matrix in the test case:
(0,3) | (1,3) | (2,3) | (3,3) | (4,3)
(0,2) | (1,2) | (2,2) | (3,2) | (4,2)
(0,1) | (1,1) | (2,1) | (3,1) | (4,1)
(0,0) | (1,0) | (2,0) | (3,0) | (4,0)
From what I understand, the memory layout associated with this matrix is:
(0,0) | (1,0) | (2,0) | (3,0) | (4,0)|(0,1) | (1,1) | (2,1) | (3,1) | (4,1) ...
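If that is right, then as a sanity check on my own definition, with is = 1 and kim = 3 my type_column should resolve to
CALL MPI_TYPE_VECTOR (1, 1, 5, MPI_INTEGER, type_column, mpicode)
i.e. one block of one integer, so each call in the loop should move a single element starting at the address I pass in.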
The main confusion here is that you have used the terminology "column", but the way you have drawn the arrays means that you are actually swapping rows. Let's stick with your convention for drawing the arrays and call the directions "horizontal" (increasing value of the first index "i") and "vertical" (increasing value of the second index "j") to avoid confusion. Note that, with Fortran's column-major storage, elements that are consecutive in "i" are contiguous in memory, while stepping once in "j" jumps kim+2*is integers.
If we fix your i <-> j bug (the first MPI_RECV passes vect2d(j,1-is) where it should pass vect2d(i,1-is)) then your code works. However, the whole point of using vectors is to avoid multiple sends so, as written, your vector code doesn't improve on using basic datatypes. For the pattern shown above, at least for a single-depth halo, you could just send contiguous data with count=kim. For deeper halos you would need a vector type with count=is, blocksize=kim and stride=kim+2*is. Having done this, you would use count=1 when sending and receiving, and you transfer the whole halo with a single call.
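A minimal sketch of that pattern, reusing the declarations from your program (the handle name type_row is introduced here purely for illustration):
INTEGER*4 :: type_row
! is blocks of kim contiguous integers, stride = padded row length
CALL MPI_TYPE_VECTOR (is, kim, kim+2*is,
& MPI_INTEGER, type_row, mpicode)
CALL MPI_TYPE_COMMIT (type_row, mpicode)
! Upward swap: send top rows, receive into bottom ghost rows
CALL MPI_SEND (vect2d(1,kjm+1-is), 1, type_row,
& neighbors(4), etiquette, comm, mpicode)
CALL MPI_RECV (vect2d(1,1-is), 1, type_row,
& neighbors(3), etiquette, comm, statut, mpicode)
! Downward swap: send bottom rows, receive into top ghost rows
CALL MPI_SEND (vect2d(1,1), 1, type_row,
& neighbors(3), etiquette, comm, mpicode)
CALL MPI_RECV (vect2d(1,kjm+1), 1, type_row,
& neighbors(4), etiquette, comm, statut, mpicode)
Each send/receive pair here replaces your whole DO i = 1, kim loop.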
For halo swaps in the horizontal direction (i.e. swapping vertical halos) you always need a vector type, with count=kjm, blocksize=is and stride=kim+2*is. Again, use count=1 when sending and receiving.
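A sketch under the same assumptions (type_vert is again an illustrative name; in your particular test the horizontal neighbours neighbors(1) and neighbors(2) are MPI_PROC_NULL, so these calls would be no-ops, but the shape of the type is the point):
INTEGER*4 :: type_vert
! kjm blocks of is integers, one block per interior row
CALL MPI_TYPE_VECTOR (kjm, is, kim+2*is,
& MPI_INTEGER, type_vert, mpicode)
CALL MPI_TYPE_COMMIT (type_vert, mpicode)
! Rightward swap: send right edge, receive into left ghost cells
CALL MPI_SEND (vect2d(kim+1-is,1), 1, type_vert,
& neighbors(2), etiquette, comm, mpicode)
CALL MPI_RECV (vect2d(1-is,1), 1, type_vert,
& neighbors(1), etiquette, comm, statut, mpicode)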
There are other issues with issuing the receives first (potential deadlock for periodic boundary conditions), which can't be entirely solved by putting the sends first either (since MPI_Send may be implemented synchronously), but for your particular test code the solution lies in being consistent with terminology and using appropriate vector types.
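For completeness, the usual way to sidestep that ordering problem is MPI_SENDRECV, which pairs the send and the receive in a single deadlock-free call; a sketch for the upward swap, using the type_row handle from the sketch above:
CALL MPI_SENDRECV (vect2d(1,kjm+1-is), 1, type_row,
& neighbors(4), etiquette,
& vect2d(1,1-is), 1, type_row,
& neighbors(3), etiquette,
& comm, statut, mpicode)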