Reputation: 101
I need to adapt this part of a very long program to MPI in C.
for (i = 0; i < total; i++) {
    sum = A[next][0][0]*B[i][0] + A[next][0][1]*B[i][1] + A[next][0][2]*B[i][2];
    next++;
    while (next < last) {
        col = column[next];
        sum += A[next][0][0]*B[col][0] + A[next][0][1]*B[col][1] + A[next][0][2]*B[col][2];
        final[col][0] += A[next][0][0]*B[i][0] + A[next][1][0]*B[i][1] + A[next][2][0]*B[i][2];
        next++;
    }
    final[i][0] += sum;
}
And I was thinking of code like this:
for (i = 0; i < num_threads; i++) {
    for (j = 0; j < total; j++) {
        check_thread[i][j] = false;
    }
}
part = total / num_threads;
for (i = thread_id * part; i < ((thread_id + 1) * part); i++) {
    sum = A[next][0][0]*B[i][0] + A[next][0][1]*B[i][1] + A[next][0][2]*B[i][2];
    next++;
    while (next < last) {
        col = column[next];
        sum += A[next][0][0]*B[col][0] + A[next][0][1]*B[col][1] + A[next][0][2]*B[col][2];
        if (!check_thread[thread_id][col]) {
            check_thread[thread_id][col] = true;
            temp[thread_id][col] = 0.0;
        }
        temp[thread_id][col] += A[next][0][0]*B[i][0] + A[next][1][0]*B[i][1] + A[next][2][0]*B[i][2];
        next++;
    }
    if (!check_thread[thread_id][i]) {
        check_thread[thread_id][i] = true;
        temp[thread_id][i] = 0.0;
    }
    temp[thread_id][i] += sum;
}
*
for (i = 0; i < total; i++) {
    for (j = 0; j < num_threads; j++) {
        if (check_thread[j][i]) {
            final[i][0] += temp[j][i];
        }
    }
}
Then I need to gather all the temporary parts into one, and I was thinking of MPI_Allgather and something like this just before the last two for loops (where the * is):
MPI_Allgather(temp, (part*sizeof(double)), MPI_DOUBLE, temp, sizeof(**temp), MPI_DOUBLE, MPI_COMM_WORLD);
But I get an execution error. Is it possible to send and receive in the same variable? If not, what could be another solution in this case?
Upvotes: 1
Views: 256
Reputation: 51553
You are calling MPI_Allgather with the wrong parameters:
MPI_Allgather(temp, (part*sizeof(double)), MPI_DOUBLE, temp, sizeof(**temp), MPI_DOUBLE, MPI_COMM_WORLD);
Instead, the correct parameters are described in the documentation (source):
MPI_Allgather
Gathers data from all tasks and distributes the combined data to all tasks
Input Parameters
sendbuf: starting address of send buffer (choice)
sendcount: number of elements in send buffer (integer)
sendtype: data type of send buffer elements (handle)
recvcount: number of elements received from any process (integer)
recvtype: data type of receive buffer elements (handle)
comm: communicator (handle)
Your sendcount and recvcount arguments are both wrong: instead of (part*sizeof(double)) and sizeof(**temp), you should pass the number of elements of the matrix temp that will be gathered from each process involved. MPI counts are expressed in elements of the given datatype (here MPI_DOUBLE), not in bytes.
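For example, if temp is stored contiguously and every rank contributes one row of total doubles, a call with element counts could look like this (a sketch; temp_local is an illustrative name for a separate per-rank send buffer, not taken from your code):
/* counts are numbers of MPI_DOUBLE elements, not bytes */
MPI_Allgather(temp_local, total, MPI_DOUBLE,   /* send my row: total doubles      */
              temp[0],    total, MPI_DOUBLE,   /* receive total doubles per rank  */
              MPI_COMM_WORLD);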
The matrix can be gathered in a single call if it is contiguously allocated in memory; if it was created as an array of pointers to separately allocated rows, then you will have to call MPI_Allgather for each row of the matrix, or use MPI_Allgatherv instead.
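A minimal sketch of one way to allocate the matrix contiguously while keeping the temp[j][i] indexing (names and sizes are illustrative; needs <stdlib.h>, error checks omitted):
double *temp_data = malloc((size_t)num_procs * total * sizeof(double)); /* one contiguous block  */
double **temp = malloc(num_procs * sizeof(double *));                   /* row pointers          */
for (int r = 0; r < num_procs; r++)
    temp[r] = temp_data + (size_t)r * total;
/* pass temp_data (or temp[0]) to MPI_Allgather, never temp itself */
With this layout, a single collective call can move the whole matrix.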
Is it possible to send and receive in the same variable?
Yes, by using the in-place option:
When the communicator is an intracommunicator, you can perform an all-gather operation in-place (the output buffer is used as the input buffer). Use the variable MPI_IN_PLACE as the value of sendbuf. In this case, sendcount and sendtype are ignored. The input data of each process is assumed to be in the area where that process would receive its own contribution to the receive buffer. Specifically, the outcome of a call to MPI_Allgather that used the in-place option is identical to the case in which all processes executed n calls to
MPI_GATHER ( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, recvbuf, recvcount, recvtype, root, comm )
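Applied to your case, assuming temp is contiguous and each rank has already accumulated its contribution in its own row thread_id, an in-place gather could look like this (a sketch, not tested against the rest of your code):
/* each rank's data is already in its own row of temp, so the same
   buffer serves as both send and receive buffer via MPI_IN_PLACE */
MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
              temp[0], total, MPI_DOUBLE, MPI_COMM_WORLD);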
Upvotes: 1