leo

Reputation: 557

Segmentation fault when enabling !$OMP DO loop

I am trying to modify legacy code to initialize an array with OpenMP. However, I encounter a segmentation fault when enabling the !$OMP DO directives in the following code section. Would you please point out what might be wrong?

I am using Fortran, compiling with gfortran, and the variables are declared in common blocks:

   common/quant/keosc,vosc,rosc,frt,grt,dipole,v_solv
   common/quant_avg/frt_avg,grt_avg,d_coup,rv_avg,b_avg

!$OMP PARALLEL
!$OMP DO private(m,j,l,mp) firstprivate(nstates,natoms) lastprivate(rv_avg,b_avg,grt_avg,frt_avg,d_coup)
      do m = 0, nstates - 1 
         rv_avg(m) = 0d0
         b_avg(m) = 0d0
         do j = 1, 3
            grt_avg(m,j) = 0d0
            do l = 1, natoms
               frt_avg(m,l,j) = 0d0           
               do mp = 0, nstates - 1         
                  d_coup(m,mp,l,j) = 0d0         
               enddo                          
            enddo
         enddo
      enddo
!$OMP END DO
!$OMP END PARALLEL

Upvotes: 0

Views: 355

Answers (3)

Hristo Iliev

Reputation: 74455

The problem is probably that you do not have enough stack space in the OpenMP threads to hold the private copies of all these arrays. d_coup in particular looks like a really big one, having 3 x natoms x nstates^2 elements. Most Fortran compilers nowadays automatically resort to heap allocation for such big arrays, but when it comes to (first|last)private variables, some OpenMP compilers, including GCC and Intel Fortran Compiler, always place them on the stack. See my answer here for more information.
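If the goal is just to zero the shared arrays, the private copies are unnecessary in the first place: each thread writes a disjoint range of m iterations, so the common-block arrays can simply stay shared. A sketch of the loop with the lastprivate clause dropped (assuming the declarations from the question), which avoids the per-thread copies and the stack pressure entirely:

```fortran
!$OMP PARALLEL DO private(m,j,l,mp)
      do m = 0, nstates - 1
         ! arrays are shared; each thread touches only its own m values
         rv_avg(m) = 0d0
         b_avg(m) = 0d0
         do j = 1, 3
            grt_avg(m,j) = 0d0
            do l = 1, natoms
               frt_avg(m,l,j) = 0d0
               do mp = 0, nstates - 1
                  d_coup(m,mp,l,j) = 0d0
               enddo
            enddo
         enddo
      enddo
!$OMP END PARALLEL DO
```

nstates and natoms are read-only inside the region, so the default shared attribute is fine for them too; the firstprivate clause was also redundant.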

Edit: Now I see that M. S. B. has actually linked to that same question in his comment.

Upvotes: 0

High Performance Mark

Reputation: 78364

You haven't shown your declaration of the dimensions of any of the arrays so I speculate that the lines

do m = 0, nstates - 1 
     rv_avg(m) = 0d0

write to a non-existent element of rv_avg, that is the element at index 0. Since Fortran programs don't, by default, check that array element accesses are within bounds, this write outside the bounds won't be caught by the run-time. If the write stays within the address space of the program when it executes, it won't cause a segmentation fault. Given the common block declarations, the 0-th element of rv_avg may well be part of d_coup.

Shake up the mapping of variables to address space by introducing OpenMP and it's easy to believe that the 0-th element of rv_avg now lies outside the address space for a thread and causes the segmentation fault.

Since the program makes other references to array elements at index 0, any one of them might be at the root of the segmentation fault.

Of course, if you follow @M.S.B.'s advice and use array syntax notation you can avoid out-of-bounds array accesses.
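A hypothetical minimal reproduction of the speculated bug: rv_avg declared with the default lower bound of 1 but accessed at index 0.

```fortran
      program bounds_demo
      implicit none
      integer, parameter :: nstates = 4   ! hypothetical size
      double precision rv_avg(nstates)    ! lower bound defaults to 1
      rv_avg(0) = 0d0                     ! out of bounds; silent by default
      end program bounds_demo
```

Compiled with gfortran's run-time checking enabled (gfortran -fcheck=bounds bounds_demo.f90), the run aborts with an index-out-of-range message instead of silently corrupting memory. If 0-based indexing is actually intended, declaring the array as rv_avg(0:nstates-1) removes the problem.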

Upvotes: 0

M. S. B.

Reputation: 29401

Have you measured where the CPU consumption is in your program? It is a waste of effort to speed up portions that don't consume much CPU time. I'd be surprised if array initializations were a high fraction of the CPU usage. The code would be more readable if instead you used array notation, e.g., rv_avg (0:nstates - 1) = 0d0.
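Assuming the arrays are declared with the bounds the loops imply, the entire nest in the question collapses to a few whole-array (array-section) assignments, with no loop indices to get wrong:

```fortran
      ! equivalent of the nested initialization loops, in array syntax
      rv_avg(0:nstates-1)                             = 0d0
      b_avg(0:nstates-1)                              = 0d0
      grt_avg(0:nstates-1, 1:3)                       = 0d0
      frt_avg(0:nstates-1, 1:natoms, 1:3)             = 0d0
      d_coup(0:nstates-1, 0:nstates-1, 1:natoms, 1:3) = 0d0
```

If each assignment covers the full declared extent of its array, the section bounds can be omitted entirely (e.g. rv_avg = 0d0).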

Upvotes: 0
