Reputation: 23
I am new to MPI, and I am currently working on a project that requires me to do array analysis on my local Beowulf cluster. My code is written in C, and it compiles correctly. It runs correctly when using a single process, but when I run it with multiple processes, every process besides the root (rank 0) dies right around the point where I try to broadcast my data. My code looks something like this:
//1. Initialize global variables
//2. Initialize MPI, get number of processes, get rank
//3. All processes create two-dimensional arrays
array1 = (char **) malloc(sizeArray1 * sizeof(char *));
array1[0] = (char *) malloc(sizeArray1 * lineLength * sizeof(char));
for(i = 1; i < sizeArray1; i++)
{
    array1[i] = array1[i - 1] + lineLength;
}
//4. Only the server will populate its arrays, then broadcast to all processes
if(rank == 0)
{
    f = fopen("path..../testFile1.txt", "r");
    if(NULL == f) {
        perror("FAILED: ");
        return -1;
    }
    numWords = 0;
    while(err != EOF && numWords < sizeArray2)
    {
        err = fscanf(f, "%[^\n]\n", array2[numWords]);
        numWords++;
    }
    fclose(f);
}
//5. Broadcast each line from both arrays to all processes
MPI_Bcast(array1, sizeArray1 * lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);
//6. do further work on arrays
The root node accomplishes all of this perfectly fine, while the other nodes will usually try to broadcast once, print a line, and then die. The exact error that I am getting is:
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: 0x37
malloc.c:2392: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
If you need to look at any other portions of my code, let me know.
Note: I have edited my code to correspond with suggestions from other users, but the error still persists.
Upvotes: 1
Views: 337
Reputation: 8395
Your arrays are made of char and not int, so you should MPI_Bcast() them as MPI_CHAR instead of MPI_INT.
For example, to broadcast one row:
MPI_Bcast(&(array1[i][0]), lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);
As a matter of style, you can also write this as
MPI_Bcast(array1[i], lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);
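Broadcasting the whole array this way takes one call per row; a minimal sketch of that loop, assuming every rank has already allocated array1 as shown in your question, would be
for (int i = 0; i < sizeArray1; i++)
{
    MPI_Bcast(array1[i], lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);
}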
Also, you might want to allocate array1 in one contiguous chunk, so you can MPI_Bcast() it with a single call (this is generally more efficient). The allocation would look like
array1 = (char **)malloc(sizeArray1 * sizeof(char *));
array1[0] = (char *)malloc(sizeArray1 * lineLength * sizeof(char));
for (int i=1; i<sizeArray1; i++) array1[i] = array1[i-1] + lineLength;
and then
MPI_Bcast(array1[0], sizeArray1 * lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);
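Note that the buffer argument is array1[0] (the start of the contiguous data block) and not array1 itself: array1 holds rank-local row pointers, which are meaningless on the other ranks, and broadcasting them is a common cause of exactly this kind of segfault. Putting it all together, a minimal self-contained sketch of the pattern might look like the following (the sizes and the line contents are hypothetical placeholders):
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i;
    int sizeArray1 = 8, lineLength = 32; /* hypothetical sizes */
    char **array1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* every rank allocates the same contiguous 2D array */
    array1 = (char **)malloc(sizeArray1 * sizeof(char *));
    array1[0] = (char *)malloc(sizeArray1 * lineLength * sizeof(char));
    for (i = 1; i < sizeArray1; i++)
        array1[i] = array1[i - 1] + lineLength;

    /* only the root populates it */
    if (rank == 0)
        for (i = 0; i < sizeArray1; i++)
            snprintf(array1[i], lineLength, "line %d", i);

    /* one broadcast of the whole contiguous data block */
    MPI_Bcast(array1[0], sizeArray1 * lineLength, MPI_CHAR, 0, MPI_COMM_WORLD);

    printf("rank %d sees: %s\n", rank, array1[1]);

    free(array1[0]);
    free(array1);
    MPI_Finalize();
    return 0;
}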
Upvotes: 1