DConstable

Reputation: 55

MPI_TYPE_CREATE_STRUCT: invalid datatype

I have a subroutine, part of a larger Fortran program, that will not run when the program is launched with MPI on a Mac laptop. The program is compiled with mpifort and runs perfectly well in serial mode.

The program also runs successfully when compiled with mpifort on an HEC cluster; the mpif.h file included on each machine is indicated near the top of the code below.

I've seen a previous post (Invalid datatype when running mpirun) which discusses changing the number of blocks to get around this error; however, that does not explain why the same program runs fine on a different architecture.

subroutine Initialise_f(Nxi, XI, BBC, B, nprofile, &
 kTzprofile, kTpprofile, vzprofile, Nspecies, particle, &
 neighbourLeft, neighbourRight, comm1d)

  use SpecificTypes

  implicit none
  include '/opt/pgi/osx86-64/2017/mpi/mpich/include/mpif.h' !MPI for MAC
  !include '/usr/shared_apps/packages/openmpi-1.10.0-gcc/include/mpif.h' !MPI for HEC

  type maxvector
     double precision, allocatable :: maxf0(:)
  end type maxvector

  integer Nxi, Nspecies
  double precision XI(Nxi), BBC(2), B(Nxi), nprofile(Nxi,Nspecies), &
   kTzprofile(Nxi,Nspecies), kTpprofile(Nxi,Nspecies), &
   vzprofile(Nxi,Nspecies)
  type(species) particle(Nspecies)
  integer neighbourLeft, neighbourRight, comm1d

! Variables for use with MPI-based communication
  integer (kind=MPI_ADDRESS_KIND) :: offsets(2)
  integer ierr, blockcounts(2), tag, oldtypes(2), &
   b_type_sendR, b_type_sendL, b_type_recvR, b_type_recvL, &
   istart, Nmess, rcount, fnodesize, fspecsize, maxf0size, &
   fspecshape(2), requestIndex, receiveLeftIndex, receiveRightIndex

! Allocate communication buffers if necessary
! (send_left and receive_left are declared elsewhere in the full program)
  fnodesize = sum( particle(:)%Nvz * particle(:)%Nmu )
  Nmess = 0
  if (neighbourLeft>-1) then
     Nmess = Nmess + 2
     allocate( send_left%ivzoffsets(Nspecies*2) )
     allocate( send_left%fs(fnodesize*2 ))
     allocate( receive_left%ivzoffsets(Nspecies*2) )
     allocate( receive_left%fs(fnodesize*2) )
     send_left%ivzoffsets = 0
     send_left%fs = 0.0d0
     receive_left%ivzoffsets = 0
     receive_left%fs = 0.0d0
  end if

! Build a few mpi data types for communication purposes
  oldtypes(1) = MPI_INTEGER
  blockcounts(1) = Nspecies*2
  oldtypes(2) = MPI_DOUBLE_PRECISION
  blockcounts(2) = fnodesize*2

  if (neighbourLeft>-1) then
     call MPI_GET_ADDRESS(receive_left%ivzoffsets, offsets(1), ierr)
     call MPI_GET_ADDRESS(receive_left%fs, offsets(2), ierr)
     offsets = offsets-offsets(1)
     call MPI_TYPE_CREATE_STRUCT(2,blockcounts,offsets,oldtypes,b_type_recvL,ierr)
     call MPI_TYPE_COMMIT(b_type_recvL, ierr)
     call MPI_GET_ADDRESS(send_left%ivzoffsets, offsets(1), ierr)
     call MPI_GET_ADDRESS(send_left%fs, offsets(2), ierr)
     offsets = offsets-offsets(1)
     call MPI_TYPE_CREATE_STRUCT(2,blockcounts,offsets,oldtypes,b_type_sendL,ierr)
     call MPI_TYPE_COMMIT(b_type_sendL, ierr)
  end if

This will bail out with the following error:

[dyn-191-250:31563] *** An error occurred in MPI_Type_create_struct
[dyn-191-250:31563] *** reported by process [1687683073,0]
[dyn-191-250:31563] *** on communicator MPI_COMM_WORLD
[dyn-191-250:31563] *** MPI_ERR_TYPE: invalid datatype
[dyn-191-250:31563] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[dyn-191-250:31563] ***    and potentially your MPI job)

Upvotes: 2

Views: 441

Answers (1)

Gilles Gouaillardet

Reputation: 8380

On your Mac you include the mpif.h from MPICH, but the error message comes from Open MPI, so you are compiling against one MPI implementation and running with another. Handles such as MPI_INTEGER and MPI_DOUBLE_PRECISION are implementation-specific constants, so the values defined in MPICH's mpif.h are not valid datatypes to Open MPI, which is exactly what MPI_ERR_TYPE is telling you. Running mpifort -show will reveal which library your wrapper actually uses.

You should simply include 'mpif.h' (with no hard-coded path) and use the MPI compiler wrappers (e.g. mpifort) to build your application, so that the header and the library always come from the same MPI installation.
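
As a minimal sketch of that setup (a stand-alone test program, not your routine; the array names and sizes here are made up for illustration):

! Minimal test: build and commit a struct type using a portable
! include 'mpif.h'. Compile and run with:  mpifort struct_test.f90
program struct_test
  implicit none
  include 'mpif.h'   ! resolved by mpifort's include path on either machine

  integer, parameter :: Nspecies = 2, fnodesize = 8   ! made-up sizes
  integer :: ivzoffsets(Nspecies*2)                   ! stands in for ivzoffsets buffer
  double precision :: fs(fnodesize*2)                 ! stands in for fs buffer
  integer :: oldtypes(2), blockcounts(2), newtype, ierr
  integer (kind=MPI_ADDRESS_KIND) :: offsets(2)

  call MPI_INIT(ierr)

  ! Same two-block layout as in the question: integers then doubles
  oldtypes(1) = MPI_INTEGER
  blockcounts(1) = Nspecies*2
  oldtypes(2) = MPI_DOUBLE_PRECISION
  blockcounts(2) = fnodesize*2

  call MPI_GET_ADDRESS(ivzoffsets, offsets(1), ierr)
  call MPI_GET_ADDRESS(fs, offsets(2), ierr)
  offsets = offsets - offsets(1)

  call MPI_TYPE_CREATE_STRUCT(2, blockcounts, offsets, oldtypes, newtype, ierr)
  call MPI_TYPE_COMMIT(newtype, ierr)

  call MPI_TYPE_FREE(newtype, ierr)
  call MPI_FINALIZE(ierr)
end program struct_test

If this test passes on the Mac, the wrapper, header, and runtime are consistent again.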

A better option is 'use mpi', and an even better one is 'use mpi_f08' if your MPI library and compilers support it (note that the latter option requires you to update your code).
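
With mpi_f08 the same construction would look roughly like this (again a sketch with made-up names; the point is that MPI handles become derived types, which is the code change the last option requires):

! Same struct-type construction with the mpi_f08 module. Datatype
! handles are now derived types rather than integers, so passing a
! stale constant from the wrong mpif.h becomes a compile-time error.
program struct_test_f08
  use mpi_f08
  implicit none

  integer, parameter :: Nspecies = 2, fnodesize = 8   ! made-up sizes
  integer :: ivzoffsets(Nspecies*2)
  double precision :: fs(fnodesize*2)
  integer :: blockcounts(2)
  integer (kind=MPI_ADDRESS_KIND) :: offsets(2)
  type(MPI_Datatype) :: oldtypes(2), newtype   ! was plain integer with mpif.h

  call MPI_Init()   ! ierror arguments are optional in mpi_f08

  oldtypes = [MPI_INTEGER, MPI_DOUBLE_PRECISION]
  blockcounts = [Nspecies*2, fnodesize*2]

  call MPI_Get_address(ivzoffsets, offsets(1))
  call MPI_Get_address(fs, offsets(2))
  offsets = offsets - offsets(1)

  call MPI_Type_create_struct(2, blockcounts, offsets, oldtypes, newtype)
  call MPI_Type_commit(newtype)

  call MPI_Type_free(newtype)
  call MPI_Finalize()
end program struct_test_f08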

Upvotes: 2
