slitvinov

Reputation: 5768

assign executables to specific coordinates in MPI Cartesian topology

I want to assign every executable to specific coordinates in a Cartesian topology. Consider the example main.c and Makefile below. I compile the executables main[0-7] and run them with the command:

mpiexec \
-np 1 ./main0 : -np 1 ./main1 : \
-np 1 ./main2 : -np 1 ./main3 : \
-np 1 ./main4 : -np 1 ./main5 : \
-np 1 ./main6 : -np 1 ./main7

If I sort the output (make run | sort -g), it always returns:

N, rank, coords[3]: 0 0   0 0 0
N, rank, coords[3]: 1 1   0 0 1
N, rank, coords[3]: 2 2   0 1 0
N, rank, coords[3]: 3 3   0 1 1
N, rank, coords[3]: 4 4   1 0 0
N, rank, coords[3]: 5 5   1 0 1
N, rank, coords[3]: 6 6   1 1 0
N, rank, coords[3]: 7 7   1 1 1

Here N is the id of an executable. It seems to work as I want: main0 goes to coordinates (0, 0, 0), main1 goes to coordinates (0, 0, 1), and so on. Does the standard fix this arrangement?

Makefile

all: main0 main1 main2 main3 main4 main5 main6 main7

cc=mpicc
main0: main.c; $(cc) -DN=0 $< -o $@
main1: main.c; $(cc) -DN=1 $< -o $@
main2: main.c; $(cc) -DN=2 $< -o $@
main3: main.c; $(cc) -DN=3 $< -o $@
main4: main.c; $(cc) -DN=4 $< -o $@
main5: main.c; $(cc) -DN=5 $< -o $@
main6: main.c; $(cc) -DN=6 $< -o $@
main7: main.c; $(cc) -DN=7 $< -o $@

run: all
    mpiexec \
    -np 1 ./main0 : -np 1 ./main1 : \
    -np 1 ./main2 : -np 1 ./main3 : \
    -np 1 ./main4 : -np 1 ./main5 : \
    -np 1 ./main6 : -np 1 ./main7

clean:; rm -f main[0-7]
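As an aside, the eight nearly identical rules can be collapsed into one pattern rule, where the stem of the target name supplies the value of N (a sketch assuming GNU Make, which supports `%` pattern rules and the `$*` stem variable):

```makefile
cc = mpicc

all: main0 main1 main2 main3 main4 main5 main6 main7

# one pattern rule replaces main0..main7: $* expands to the stem
# matched by %, i.e. the digit in the target name
main%: main.c
	$(cc) -DN=$* $< -o $@

run: all
	mpiexec \
	-np 1 ./main0 : -np 1 ./main1 : \
	-np 1 ./main2 : -np 1 ./main3 : \
	-np 1 ./main4 : -np 1 ./main5 : \
	-np 1 ./main6 : -np 1 ./main7

clean:; rm -f main[0-7]
```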

main.c

#include <stdio.h>
#include <mpi.h>

#define ndims 3
int rank, coords[ndims];
int    dims[ndims] = {2, 2, 2};
int periods[ndims] = {0, 0, 0};
int reorder = 0;
MPI_Comm cart;

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);
  MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, periods, reorder, &cart);

  MPI_Comm_rank(cart, &rank);
  MPI_Cart_coords(cart, rank, ndims, coords);
  printf("N, rank, coords[3]: %d %d   %d %d %d\n", N, rank, coords[0], coords[1], coords[2]);

  MPI_Finalize();
}

Upvotes: 0

Views: 58

Answers (1)

Zulan

Reputation: 22670

Does the standard fix this arrangment?

Yes.

If you use mpiexec[1] with the following form:

mpiexec { <above arguments> } : { ... } : { ... } : ... : { ... }

8.8 Portable MPI Process Startup says it corresponds to MPI_COMM_SPAWN_MULTIPLE.

In turn,

Their ranks in MPI_COMM_WORLD correspond directly to the order in which the commands are specified in MPI_COMM_SPAWN_MULTIPLE. Assume that m1 processes are generated by the first command, m2 by the second, etc. The processes corresponding to the first command have ranks 0, 1, ..., m1 − 1. The processes in the second command have ranks m1, m1 + 1, ..., m1 + m2 − 1. The processes in the third have ranks m1 + m2, m1 + m2 + 1, ..., m1 + m2 + m3 − 1, etc.

That gives you a consistent mapping from command line to MPI_COMM_WORLD rank. 7.2 Virtual Topologies then gives you the mapping to the coordinates:

Process coordinates in a Cartesian structure begin their numbering at 0. Row-major numbering is always used for the processes in a Cartesian structure.

That said, the approach of compiling a different binary for each rank that just differs by one macro seems silly and not quite elegant.
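A minimal sketch of the single-binary alternative: since with reorder = 0 the rank in the Cartesian communicator equals the MPI_COMM_WORLD rank, the rank itself can play the role of the -DN macro (run with `mpiexec -np 8 ./main`; requires an MPI runtime):

```c
#include <stdio.h>
#include <mpi.h>

#define ndims 3

int main(int argc, char *argv[]) {
  int rank, coords[ndims];
  int dims[ndims]    = {2, 2, 2};
  int periods[ndims] = {0, 0, 0};
  MPI_Comm cart;

  MPI_Init(&argc, &argv);
  /* reorder = 0: Cartesian ranks match MPI_COMM_WORLD ranks */
  MPI_Cart_create(MPI_COMM_WORLD, ndims, dims, periods, 0, &cart);
  MPI_Comm_rank(cart, &rank);
  MPI_Cart_coords(cart, rank, ndims, coords);
  /* rank replaces the compile-time N */
  printf("rank, coords[3]: %d   %d %d %d\n",
         rank, coords[0], coords[1], coords[2]);
  MPI_Finalize();
  return 0;
}
```

If the eight programs must genuinely differ in behavior, the rank (or an argv argument per `-np 1` block) can select the code path at runtime instead of compile time.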

[1]: The specifications only apply to mpiexec, not mpirun. MPI implementations are not required to provide mpiexec.

Upvotes: 1
