Reputation: 31
I am new to MPI and I am writing a simple MPI program to compute the product of a matrix and a vector, namely A*b = c. However, my code doesn't work. The source code is listed below.
If I replace the declaration of A, b, c and buffer by
double A[16], b[4], c[4], buffer[8];
and comment out the lines related to the allocation and free operations, my code works and the result is correct. So I suspect the trouble is related to the pointers, but I have no idea how to troubleshoot it.
One more thing: in my code, buffer only needs 4 elements, but it only works if I declare the buffer with a size greater than 8.
#include<mpi.h>
#include<iostream>
#include<stdlib.h>
using namespace std;
int nx = 4, ny = 4, nxny;
int ix, iy;
double *A = nullptr, *b = nullptr, *c = nullptr, *buffer = nullptr;
double ans;
// info MPI
int myGlobalID, root = 0, numProc;
int numSent;
MPI_Status status;
// functions
void get_ixiy(int);
int main(){
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &numProc);
    MPI_Comm_rank(MPI_COMM_WORLD, &myGlobalID);
    nxny = nx * ny;

    A = new double(nxny);
    b = new double(ny);
    c = new double(nx);
    buffer = new double(ny);

    if(myGlobalID == root){
        // init A, b
        for(int k = 0; k < nxny; ++k){
            get_ixiy(k);
            b[iy] = 1;
            A[k] = k;
        }
        numSent = 0;
        // send b to each worker processor
        MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
        // send a row of A to each worker processor, tag with row number
        for(ix = 0; ix < min(numProc - 1, nx); ++ix){
            for(iy = 0; iy < ny; ++iy){
                buffer[iy] = A[iy + ix * ny];
            }
            MPI_Send(&buffer, ny, MPI_DOUBLE, ix+1, ix+1, MPI_COMM_WORLD);
            numSent += 1;
        }
        for(ix = 0; ix < nx; ++ix){
            MPI_Recv(&ans, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            int sender = status.MPI_SOURCE;
            int ansType = status.MPI_TAG;
            c[ansType] = ans;
            // send another row to worker process
            if(numSent < nx){
                for(iy = 0; iy < ny; ++iy){
                    buffer[iy] = A[iy + numSent * ny];
                }
                MPI_Send(&buffer, ny, MPI_DOUBLE, sender, numSent+1, MPI_COMM_WORLD);
                numSent += 1;
            }
            else
                MPI_Send(MPI_BOTTOM, 0, MPI_DOUBLE, sender, 0, MPI_COMM_WORLD);
        }
        for(ix = 0; ix < nx; ++ix){
            std::cout << c[ix] << " ";
        }
        std::cout << std::endl;
        delete [] A;
        delete [] b;
        delete [] c;
        delete [] buffer;
    }
    else{
        MPI_Bcast(&b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);
        if(myGlobalID <= nx){
            while(1){
                MPI_Recv(&buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
                if(status.MPI_TAG == 0) break;
                int row = status.MPI_TAG - 1;
                ans = 0.0;
                for(iy = 0; iy < ny; ++iy) ans += buffer[iy] * b[iy];
                MPI_Send(&ans, 1, MPI_DOUBLE, root, row, MPI_COMM_WORLD);
            }
        }
    }
    MPI_Finalize();
    return 0;
} // main

void get_ixiy(int k){
    ix = k / ny;
    iy = k % ny;
}
The error information is listed below.
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 7455 RUNNING AT ***
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 11 (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions
Upvotes: 3
Views: 1310
Reputation: 1043
There are several problems in your code that you have to fix first.
First, you try to access an element of b[] that does not exist in this for loop:
for(int k = 0; k < nxny; ++k){
    get_ixiy(k);
    b[k] = 1; // WARNING: this is an error
    A[k] = k;
}
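A minimal sketch of an in-range initialization, assuming b holds ny elements and A holds nx*ny elements as in the question, so the index into b always stays within bounds:

for(int k = 0; k < nxny; ++k){
    int iy = k % ny;  // column index, always in 0..ny-1
    b[iy] = 1.0;      // in range, unlike b[k], which runs up to nxny-1
    A[k] = k;
}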
Second, you are deleting the allocated memory only in the root process. This causes a memory leak:
if(myGlobalID == root){
    // ...
    delete [] A;
    delete [] b;
    delete [] c;
    delete [] buffer;
}
You have to delete the allocated memory in all processes.
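One possible arrangement, keeping the question's variable names: move the deletes after the if/else so that every rank frees what it allocated (note that delete [] must be paired with the array form new double[nxny] etc., not new double(nxny)):

if(myGlobalID == root){
    // ... root part ...
}
else{
    // ... worker part ...
}
// every rank allocated A, b, c and buffer, so every rank frees them
delete [] A;
delete [] b;
delete [] c;
delete [] buffer;
MPI_Finalize();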
Third, you have a useless function void get_ixiy(int);
that changes the global variables ix, iy. It is useless because, after calling this function, you never use ix, iy before you overwrite them manually. See here:
for(ix = 0; ix < min(numProc - 1, nx); ++ix){
    for(iy = 0; iy < ny; ++iy){
        // ...
    }
}
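If the indices are only needed inside the loops, a sketch with plain local loop variables drops the helper and the globals entirely:

for(int ix = 0; ix < min(numProc - 1, nx); ++ix){
    for(int iy = 0; iy < ny; ++iy){
        buffer[iy] = A[iy + ix * ny];  // row ix, column iy
    }
    // ... send the row as before ...
}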
Fourth, you are using MPI_Send()
and MPI_Recv()
in a completely wrong way. You are lucky that you don't get more errors.
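One concrete instance of the misuse: b and buffer are already pointers (double*), but the calls pass &b and &buffer, i.e. the address of the pointer variable itself, so MPI reads and writes the wrong memory. A minimal sketch of the corrected calls, keeping the question's variable names:

// pass the data pointer itself, not the address of the pointer
MPI_Bcast(b, ny, MPI_DOUBLE, root, MPI_COMM_WORLD);                            // not &b
MPI_Send(buffer, ny, MPI_DOUBLE, ix+1, ix+1, MPI_COMM_WORLD);                  // not &buffer
MPI_Recv(buffer, ny, MPI_DOUBLE, root, MPI_ANY_TAG, MPI_COMM_WORLD, &status);  // not &buffer
// &ans is fine here, because ans is a plain double, not a pointer
MPI_Recv(&ans, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

Together with the array-form allocations, this is likely why the stack-array version in the question works while the pointer version segfaults.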
Upvotes: 1