I am writing a Gauss-Seidel solver using OpenMP to solve a system of linear equations, but I am not getting the correct output. The output is correct when n is set to 1. The algorithm seems right to me, though. Can anyone help?
Code:
int seidel_p(double *A, double *b, double *x, int n)
{
    double *dx;
    dx = (double*) calloc(n, sizeof(double));
    int epsilon = 1.0e-4;
    int i, j, k, id;
    double dxi;
    double sum;
    for(int i = 0; i < n; i++)
    {
        x[i] = 0;
    }
    int maxit = 2*n*n;
    for(k = 0; k < maxit; k++)
    {
        for(i = 0; i < n; i++)
        {
            dx[i] = b[i];
            #pragma omp parallel shared(A,x)
            {
                for(j = 0; j < n; j++)
                {
                    if(i != j)
                    {
                        dxi += A[i*n+j]*x[j];
                    }
                }
                #pragma omp critical
                dx[i] -= dxi;
            }
            dx[i] /= A[i*n + i];
            x[i] = dx[i];
            sum += ((dx[i] >= 0.0) ? dx[i] : -dx[i]);
            if(sum <= epsilon) break;
        }
    }
    for(int i = 0; i < n; i++)
    {
        printf("\t\t%f\n", dx[i]);
    }
    free(dx);
}
Upvotes: 0
Views: 899
Reputation: 149
Unless you have a VERY good reason to do this yourself, don't. There are lots of libraries that do this (and more). Eigen is a header-only library; check it out.
http://eigen.tuxfamily.org/dox/group__TutorialLinearAlgebra.html
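As a rough sketch of what that looks like, solving a small dense system with Eigen's dense LU solver takes only a few lines. The 3x3 matrix values below are made up purely for illustration:

#include <iostream>
#include <Eigen/Dense>

int main()
{
    // A small 3x3 system Ax = b with placeholder values, just to show the API.
    Eigen::MatrixXd A(3, 3);
    A << 4, -1,  0,
        -1,  4, -1,
         0, -1,  4;
    Eigen::VectorXd b(3);
    b << 1, 2, 3;

    // Dense LU solve; Eigen also offers QR, Cholesky, and iterative solvers.
    Eigen::VectorXd x = A.partialPivLu().solve(b);
    std::cout << x << std::endl;
    return 0;
}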
You could also use BLAS / LAPACK, of which there are many implementations.
https://github.com/Reference-LAPACK/lapack (this implementation is not particularly optimized).
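If you prefer plain C, a rough sketch of the same kind of solve through the LAPACKE C interface might look like the following; the exact header and link flags (e.g. -llapacke -llapack -lblas) depend on your installation:

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* A small 3x3 system, stored row-major. dgesv overwrites b with the solution. */
    double A[9] = { 4, -1,  0,
                   -1,  4, -1,
                    0, -1,  4 };
    double b[3] = { 1, 2, 3 };
    lapack_int ipiv[3];

    lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 3, 1, A, 3, ipiv, b, 1);
    if (info != 0) { printf("dgesv failed: %d\n", (int)info); return 1; }

    for (int i = 0; i < 3; i++) printf("x[%d] = %f\n", i, b[i]);
    return 0;
}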
If you insist on rolling your own implementation, the first thing you should check is whether it works with just a single thread. You can try this by setting the environment variable OMP_NUM_THREADS to 1 or by calling omp_set_num_threads(1). If that works, then you know your algorithm is correct and it is just your use of OpenMP that is broken.
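For example, a minimal sketch of forcing a single thread around the call (assuming the seidel_p function from the question is defined in the same file) would be:

#include <omp.h>

/* seidel_p(...) as posted in the question goes here. */

int main(void)
{
    /* A tiny 2x2 diagonally dominant system, just as a sanity check. */
    double A[4] = { 4, 1,
                    1, 3 };
    double b[2] = { 1, 2 };
    double x[2];

    /* Force a single OpenMP thread; equivalent to running with OMP_NUM_THREADS=1. */
    omp_set_num_threads(1);
    seidel_p(A, b, x, 2);
    return 0;
}

Alternatively, run the unmodified program from the shell with OMP_NUM_THREADS=1 set in the environment.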
Upvotes: 1