Reputation: 215
Fortran code using MPI_Waitall will not compile on my system with gfortran and the Open MPI library, but the same code does compile with the Intel Fortran compiler (and the Intel MPI implementation). I can't figure out how or why this is happening. Here is a minimal/trivial example of code that fails to compile:
test.f90
program TEST_WAITALL
use MPI
implicit none
integer :: ierr
integer :: N
integer, allocatable :: reqs(:)
integer :: to, tag, count
double precision :: data(100)
N = 5
reqs = 1
call MPI_INIT( ierr )
call MPI_Waitall(N, reqs, MPI_STATUS_IGNORE, ierr)
call MPI_SEND( data, count, MPI_DOUBLE_PRECISION, to, tag, MPI_COMM_WORLD, ierr )
call MPI_FINALIZE ( ierr )
end program TEST_WAITALL
Note this is not intended to run, and is clearly "nonsense" code.
When I compile with the intel toolchain, there are no issues:
mpiifort test.f90
When I compile with the gfortran/openmpi toolchain I get an error:
mpifort test.f90
test.f90:17:54:
call MPI_Waitall(N, reqs, MPI_STATUS_IGNORE, ierr)
1
Error: There is no specific subroutine for the generic ‘mpi_waitall’ at (1)
If I comment out just the call to MPI_Waitall, then the code compiles with both gfortran/openmpi and intel. So MPI is being found and linked against.
My first guess was that I had somehow used the wrong interface to MPI_Waitall and was passing arguments of the wrong type, but as far as I can tell from the online documentation, the argument types look correct.
I've also tried changing the third argument from an array of statuses to MPI_STATUS_IGNORE to see whether that would fix the problem, but it did not.
I have openmpi-bin installed through the package manager (apt) on Ubuntu, and the compiler info is as follows:
mpifort -v
Using built-in specs.
COLLECT_GCC=/usr/bin/gfortran
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 7.3.0-27ubuntu1~18.04' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-7 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)
Upvotes: 0
Views: 1149
Reputation: 60088
You have to use MPI_STATUSES_IGNORE, not MPI_STATUS_IGNORE, because MPI_Waitall expects an array of statuses, one for each request.
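For reference, something like this minimal sketch should compile with both toolchains (it keeps the shape of the code from the question; MPI_REQUEST_NULL is used only so the wait completes immediately):
program test_waitall
  use MPI
  implicit none
  integer :: ierr
  integer :: N
  integer, allocatable :: reqs(:)
  call MPI_INIT( ierr )
  N = 5
  allocate( reqs(N) )
  ! null requests complete immediately, so this only exercises the interface
  reqs = MPI_REQUEST_NULL
  ! the third argument must match the array-of-statuses dummy argument
  call MPI_Waitall( N, reqs, MPI_STATUSES_IGNORE, ierr )
  call MPI_FINALIZE( ierr )
end program test_waitall
MPI_STATUS_IGNORE is the single-status counterpart, used with calls such as MPI_Wait and MPI_Recv.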
Upvotes: 1