Mac cchiatooo

Reputation: 53

MPI-IO: MPI_FILE_WRITE_ALL with a subarray filetype, and what count to pass

I am trying to write a 4x4 array using MPI_FILE_SET_VIEW and MPI_FILE_WRITE_ALL. xx is a 4x4 array and I expect xx = (0,1,2,3; 0,1,2,3; 0,1,2,3; 0,1,2,3) from this code. The global size is 4 and the local size is 2. I first create a 2x2 subarray filetype, so that the 4x4 array is divided into four 2x2 parts. Then I set the view with this filetype and write xx. The result should equal (0,1,2,3; 0,1,2,3; 0,1,2,3; 0,1,2,3); however, it does not. Some of the values are right and some are wrong.

1. When I do j=ls2,le2, i=ls1,le1, xx(i,j)=i, is the array xx() a 4x4 array or a 2x2 array? Here ls1=ls2=0 and le1=le2=1.

2. For MPI_FILE_WRITE_ALL, should I pass the 4x4 array or the 2x2 array? And what should the count be, 1 or 4?

3. For MPI_FILE_WRITE_ALL, should I use filetype as the datatype argument?

  integer::filesize,buffsize,i,Status(MPI_STATUS_SIZE),charsize,disp,filetype,j,count
  integer::nproc,cart_comm,ierr,fh,datatype
  
  INTEGER(KIND=MPI_OFFSET_KIND) offset
  integer,dimension(dim):: sizes,inersizes,start,sb,ss
  character:: name*50,para*100,zone*100


  do j=local_start(2),local_end(2)
     do i=local_start(1),local_end(1)
        xx(i,j)=i
     enddo
  enddo

  count=1
  offset=0
  start=cart_coords*local_length



  call MPI_TYPE_CREATE_SUBARRAY(2,global_length,local_length,start,MPI_ORDER_FORTRAN,&
  MPI_integer,filetype,ierr)
  call MPI_TYPE_COMMIT(filetype,ierr)

  call MPI_File_open(MPI_COMM_WORLD,'out.dat', &
  MPI_MODE_WRONLY + MPI_MODE_CREATE,MPI_INFO_NULL,fh,ierr)


  call MPI_File_set_view(fh,offset,MPI_integer,filetype,&
  "native",MPI_INFO_NULL,ierr)
  CALL MPI_FILE_WRITE(fh, xx,1, filetype, MPI_STATUS_ignore, ierr)

Upvotes: 0

Views: 191

Answers (1)

Ian Bush

Reputation: 7433

Below is code which I think does what you want. It is based upon what you posted yesterday and then deleted - please don't do that; instead, edit the question to improve it. I have also changed to a 6x4 global size and a 3x2 process grid, as rectangular grids are more likely to catch bugs.

Anyway, to answer your questions:

1 - You only store a part of the array locally, so the array needs to be declared as only (1:2,1:2). This is almost the whole point of distributed memory programming: each process only holds a part of the whole data structure.

2 - You only have a 2x2 array locally, so it should be a 2x2 array holding whatever data is to be stored locally. You are writing an array of integers, so I think it is simplest to say you are writing 4 integers.

3 - See above: you are writing an array of integers, so it is simplest to say you are writing 4 integers. The filetype is (in my experience) only used in the call to MPI_File_set_view, where it describes the layout of the data in the file. When you actually write data, just tell mpi_file_write and friends what you are writing, as sketched below.
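
To make that division of labour concrete, here is a minimal sketch of the pattern, using the same names as the full program below: the filetype appears only in the view, and the write call describes just the local buffer.

  ! filetype describes where this process's block lives in the file
  Call MPI_File_set_view(fh, offset, MPI_INTEGER, filetype, &
       "native", MPI_INFO_NULL, ierr)
  ! the write itself just says "4 integers from buff"
  Call MPI_File_write_all(fh, buff, 4, MPI_INTEGER, MPI_STATUS_IGNORE, ierr)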

ijb@ijb-Latitude-5410:~/work/stack$ mpif90 --version
GNU Fortran (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

ijb@ijb-Latitude-5410:~/work/stack$ mpif90 --showme:version
mpif90: Open MPI 4.0.3 (Language: Fortran)
ijb@ijb-Latitude-5410:~/work/stack$ cat mpiio.f90
Program test
  Use mpi
  Implicit None
  Integer::rank,nproc,ierr,filetype,cart_comm
  Integer::fh
  Integer(kind=mpi_offset_kind):: offset=0
  Integer,Dimension(2,2)::buff
  Integer::gsize(2)
  Integer::start(2)
  Integer::subsize(2)
  Integer::coords(2)
  Integer:: nprocs_cart(2)=(/3,2/)
  Integer :: i, j
  Logical::periods(2)
  Character( Len = * ), Parameter :: filename = 'out.dat'

  gsize= [ 6,4 ]
  subsize= [ 2,2 ]
  periods = [ .False., .False. ]
  offset=0
  
  Call MPI_init(ierr)
  Call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
  Call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  Call MPI_Dims_create(nproc, 2, nprocs_cart, ierr)
  Call MPI_Cart_create(MPI_COMM_WORLD, 2, nprocs_cart, periods, .True., &
       cart_comm, ierr)
  Call MPI_Comm_rank(cart_comm, rank, ierr)
  Call MPI_Cart_coords(cart_comm, rank, 2, coords, ierr)

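  ! Where this process's 2x2 block starts (zero-based) within the global 6x4 array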
  start=coords * subsize

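  ! Fill the local block with its global column-major index, so the file
  ! contents are easy to check with od below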
  Do j = 1, 2
     Do i = 1, 2
        buff( i, j ) = ( start( 1 ) + ( i - 1 ) ) + &
             ( start( 2 ) + ( j - 1 ) ) * gsize( 1 )
     End Do
  End Do

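  ! Describe this process's 2x2 block as a subarray of the global array;
  ! this filetype is used only in MPI_File_set_view below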
  Call MPI_TYPE_CREATE_SUBARRAY(2,gsize,subsize,start,MPI_ORDER_FORTRAN,&
       MPI_integer,filetype,ierr)
  Call MPI_TYPE_COMMIT(filetype,ierr)

  ! For testing make sure we have a fresh file every time
  ! so don't get confused by looking at the old version
  If( rank == 0 ) Then
     Call mpi_file_delete( filename, MPI_INFO_NULL, ierr )
  End If
  Call mpi_barrier( mpi_comm_world, ierr )

  ! Open in exclusive mode making sure the delete has occurred
  Call MPI_File_open(MPI_COMM_WORLD,filename,&
       MPI_MODE_WRONLY + MPI_MODE_CREATE + MPI_MODE_EXCL, MPI_INFO_NULL, fh,ierr)

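  ! The view places each process's writes at its block's location in the
  ! file; the write call itself only describes the local buffer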
  Call MPI_File_set_view(fh,offset,MPI_integer,filetype,&
       "native",MPI_INFO_NULL,ierr)

  Call MPI_FILE_WRITE_all(fh, buff, 4, mpi_integer, MPI_STATUS_ignore, ierr)


  Call MPI_File_close(fh,ierr)
  Call MPI_FINALIZE(ierr)
  
End Program test
ijb@ijb-Latitude-5410:~/work/stack$ mpif90 -Wall -Wextra -fcheck=all -O -g -std=f2008 -fcheck=all mpiio.f90 
ijb@ijb-Latitude-5410:~/work/stack$ mpirun --oversubscribe -np 6 ./a.out 
ijb@ijb-Latitude-5410:~/work/stack$ od -v -Ad -t d4 out.dat
0000000           0           1           2           3
0000016           4           5           6           7
0000032           8           9          10          11
0000048          12          13          14          15
0000064          16          17          18          19
0000080          20          21          22          23
0000096

Upvotes: 0
