Pradeep Kumar Jha

Reputation: 877

performance of allocatable vs. statically sized arrays

I have a Fortran code that I had to modify to include a new library. Initially, the size of an array was set in the Makefile, which meant that every time I wanted to change the array size I had to recompile the code. I changed this to read the array size from an "input parameters file", so recompilation is no longer needed. However, for various reasons, my code is now much slower than before.

Talking to my boss, he suggested that because the array size is no longer known at compile time, the code may not be as well optimized. Is that possibly true?

Thanks

---------------Edit---------------------

Initially, there were these lines in the Makefile:

NL    = 8
@echo Making $(SIZE_FILE) .....
echo "      integer, parameter :: nl = $(NL)" > $(SIZE_FILE)

This created a "sizefile" with the value of "NL". This file was "include"d at the top of the main program, and arrays were then declared like this in the Fortran file:

 include "sizefile"
 real*8, dimension(nl) :: ur

Now I have written a subroutine called "read_input_parameters", which is called by the program and reads the value of "nl" from a text file. I then allocate the array like this:

  program test

  implicit none
  integer :: n
  real*8, allocatable :: ur(:)

  call read_input_parameters(n)

  allocate(ur(n))

  ! *operations*

  deallocate(ur)
  end program test
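For reference, a minimal sketch of what such a subroutine could look like; the file name "input_parameters.txt", the unit number, and the file format are assumptions, not the asker's actual code:

```fortran
! Hypothetical sketch: read the array size from a one-line text file.
subroutine read_input_parameters(n)
  implicit none
  integer, intent(out) :: n

  ! File name and layout are assumptions; adapt to the real input file.
  open(unit=10, file="input_parameters.txt", status="old")
  read(10, *) n
  close(10)
end subroutine read_input_parameters
```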

Upvotes: 0

Views: 238

Answers (1)

You should use a profiler, find the operations that are slow, and post their code. The code you showed is not enough to diagnose the problem. Are the results correct, at least?

The slowness can be caused by many factors. One of them is unfavourable argument passing, which forces copy-in/copy-out of array temporaries. The fact that a subroutine may not know at compile time whether the array is contiguous can also do some harm, but not much.
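To illustrate the copy-in/copy-out point, here is a small sketch (all names are hypothetical): passing a strided, non-contiguous array section to a subroutine with an explicit-shape dummy argument typically forces the compiler to create a contiguous temporary, copying the data in on entry and back out on return:

```fortran
program copy_demo
  implicit none
  real*8 :: a(1000000)

  a = 1.0d0

  ! a(1::2) is a non-contiguous section, but "work" expects a
  ! contiguous explicit-shape array, so most compilers create a
  ! temporary copy here (copy-in) and copy it back afterwards
  ! (copy-out). With large arrays in a hot loop, this is costly.
  call work(a(1::2), 500000)

contains

  subroutine work(x, n)
    integer :: n
    real*8 :: x(n)   ! explicit-shape dummy: assumed contiguous
    x = x * 2.0d0
  end subroutine work

end program copy_demo
```

Passing the whole array, or using an assumed-shape dummy (`x(:)`) with an explicit interface, avoids the temporary in this case.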

Upvotes: 1
