Reputation: 3
I'm trying to parallelize this code using OpenMP.
for(t_step=0;t_step<Ntot;t_step++) {
    // current row
    if(cur_row + 1 < Npt_x) cur_row++;
    else cur_row = 0;
    // get data from the file, which updates only the row "cur_row" of array val
    read_line(f_u, val[cur_row]);
    // compute
    for(i=0;i<Npt_x;i++) {
        for(j=0;j<Npt_y;j++) {
            i_corrected = cur_row - i;
            if(i_corrected < 0) i_corrected = Npt_x + i_corrected;
            R[i][j] += val[cur_row][0]*val[i_corrected][j]/Ntot;
        }
    }
}
with
- val and R declared as double **,
- Npt_x and Npt_y are about 500,
- Ntot is about 10^6.
I've done this
for(t_step=0;t_step<Ntot;t_step++) {
    // current row
    if(cur_row + 1 < Npt_x) cur_row++;
    else cur_row = 0;
    // get data from the file, which updates only the row "cur_row" of array val
    read_line(f_u, val[cur_row]);
    // compute
    #pragma omp parallel for collapse(2), private(i,j,i_corrected)
    for(i=0;i<Npt_x;i++) {
        for(j=0;j<Npt_y;j++) {
            i_corrected = cur_row - i;
            if(i_corrected < 0) i_corrected = Npt_x + i_corrected;
            R[i][j] += val[cur_row][0]*val[i_corrected][j]/Ntot;
        }
    }
}
The problem is that it doesn't seem to be efficient. Is there a way to use OpenMP more efficiently in this case?
Many thanks.
Upvotes: 0
Views: 188
Reputation: 9489
Right now, I would try something like this:
for(t_step=0;t_step<Ntot;t_step++) {
    // current row
    if(cur_row + 1 < Npt_x)
        cur_row++;
    else
        cur_row = 0;
    // get data from the file, which updates only the row "cur_row" of array val
    read_line(f_u, val[cur_row]);
    // compute
    #pragma omp parallel for private(i,j,i_corrected)
    for(i=0;i<Npt_x;i++) {
        i_corrected = cur_row - i;
        if(i_corrected < 0)
            i_corrected += Npt_x;
        double tmp = val[cur_row][0]/Ntot;
        #if defined(_OPENMP) && _OPENMP > 201306
        #pragma omp simd
        #endif
        for(j=0;j<Npt_y;j++) {
            R[i][j] += tmp*val[i_corrected][j];
        }
    }
}
However, since the code will be memory bound, it's not certain that this will give you much parallel speed-up... Worth a try, though.
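If thread management also turns out to be part of the problem (the parallel region above is created and destroyed Ntot times, i.e. about 10^6 times), another variant worth trying is to keep a single parallel region alive around the whole time loop and let one thread do the file read. The sketch below is untested and simply reuses the variable names from your question:

#pragma omp parallel private(i, j, i_corrected)
{
    // t_step is declared inside the region so each thread has its own copy;
    // every thread runs the full time loop and meets the same work-sharing
    // constructs in the same order
    for(int t_step = 0; t_step < Ntot; t_step++) {
        // one thread advances cur_row and reads the new line; the implicit
        // barrier at the end of "single" makes the updated row of val
        // visible to every thread before the computation starts
        #pragma omp single
        {
            if(cur_row + 1 < Npt_x)
                cur_row++;
            else
                cur_row = 0;
            read_line(f_u, val[cur_row]);
        }
        // work-sharing loop over the rows of R; the implicit barrier at its
        // end prevents the next read from overwriting val while some threads
        // are still computing
        #pragma omp for
        for(i=0;i<Npt_x;i++) {
            i_corrected = cur_row - i;
            if(i_corrected < 0)
                i_corrected += Npt_x;
            double tmp = val[cur_row][0]/Ntot;
            for(j=0;j<Npt_y;j++)
                R[i][j] += tmp*val[i_corrected][j];
        }
    }
}

This avoids paying the fork/join cost once per time step, but the memory-bandwidth limit mentioned above still applies.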
Upvotes: 1