I have a subroutine that is supposed to mix the values in an array W % R between different processes using MPI_SEND. It works on my laptop (in the sense that it doesn't crash) with both the Intel and gfortran compilers, but when I run it on an HPC cluster the program freezes the first time the subroutine is called.
SUBROUTINE mix_walkers( W )
    include 'mpif.h'
    TYPE(walkerList), INTENT(INOUT) :: W
    INTEGER, SAVE :: calls = 0
    INTEGER :: ierr, nthreads, rank, width, self, send, recv, sendFrstWlkr, sendLstWlkr, sendWlkrcount, &
               recvFrstWlkr, recvLstWlkr, recvWlkrcount, status
    calls = calls + 1
    CALL MPI_COMM_SIZE( MPI_COMM_WORLD, nthreads, ierr )
    CALL MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
    width = W % nwlkr / nthreads
    IF( MODULO( calls, nthreads ) == 0 ) calls = calls + 1
    send = MODULO( rank + calls, nthreads )
    recv = MODULO( rank - calls, nthreads )
    sendFrstWlkr = width * send + 1
    recvFrstWlkr = width * recv + 1
    sendLstWlkr = MIN( sendFrstWlkr - 1 + width, W % nwlkr )
    recvLstWlkr = MIN( recvFrstWlkr - 1 + width, W % nwlkr )
    sendWlkrcount = SIZE( W % R( :, :, sendFrstWlkr : sendLstWlkr ) )
    recvWlkrcount = SIZE( W % R( :, :, recvFrstWlkr : recvLstWlkr ) )
    IF( send == rank ) RETURN
    ASSOCIATE( sendWalkers => W % R( :, :, sendFrstWlkr : sendLstWlkr ), &
               recvWalkers => W % R( :, :, recvFrstWlkr : recvLstWlkr ) )
        CALL MPI_SEND( sendWalkers, sendWlkrcount, MPI_DOUBLE_PRECISION, send, calls, MPI_COMM_WORLD, ierr )
        CALL MPI_RECV( recvWalkers, recvWlkrcount, MPI_DOUBLE_PRECISION, recv, calls, MPI_COMM_WORLD, status, ierr )
    END ASSOCIATE
END SUBROUTINE mix_walkers
MPI_SEND is blocking: it is not guaranteed to return until the destination process has posted a matching receive. In your code every rank calls MPI_SEND first, so the receives may never be reached because each process can sit waiting inside its send. To fix this, look into MPI_ISEND/MPI_IRECV together with MPI_WAIT, or use MPI_SENDRECV.
For more details see section 3.4 in the MPI standard at https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf
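As an illustration, here is a minimal sketch of how the paired send/receive at the end of the question could be collapsed into a single MPI_SENDRECV call. It assumes the same walkerList type and index variables as in the question; note also that the status argument must be an INTEGER array of size MPI_STATUS_SIZE, not a scalar as declared there.

INTEGER :: status( MPI_STATUS_SIZE )   ! full status array required by MPI_RECV/MPI_SENDRECV

ASSOCIATE( sendWalkers => W % R( :, :, sendFrstWlkr : sendLstWlkr ), &
           recvWalkers => W % R( :, :, recvFrstWlkr : recvLstWlkr ) )
    ! The send and the receive are progressed as one operation, so no rank can
    ! get stuck in MPI_SEND waiting for a receive that is never posted.
    CALL MPI_SENDRECV( sendWalkers, sendWlkrcount, MPI_DOUBLE_PRECISION, send, calls, &
                       recvWalkers, recvWlkrcount, MPI_DOUBLE_PRECISION, recv, calls, &
                       MPI_COMM_WORLD, status, ierr )
END ASSOCIATE

The nonblocking route (MPI_IRECV and MPI_ISEND followed by MPI_WAIT/MPI_WAITALL) also removes the deadlock, but with array sections like these you would have to make sure the compiler does not pass a temporary copy that is gone before the wait completes.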