Mike

Reputation: 67

Fortran precision default with/out compiler flag

I'm having trouble with the precision of constant numerics in Fortran.

Do I need to write every 0.1 as 0.1d0 to get double precision? I know the compiler has a flag for this, such as -fdefault-real-8 in gfortran, that addresses this kind of problem. Would that be a portable and reliable way to do it? And how can I check whether the flag actually took effect for my code?

I was using F2py to call Fortran code from my Python code, and it doesn't report an error even if I pass it an unrecognized flag, which is what worries me.
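One way to check whether a flag like -fdefault-real-8 actually took effect is to query the kind and precision of a bare literal from within the compiled code itself. A minimal sketch (the specific kind numbers shown in the comments are what gfortran typically reports, not a language guarantee):

```fortran
program check_default_real
    implicit none
    ! With no flags, gfortran typically reports kind 4, 6 decimal
    ! digits, and 32 storage bits for a bare literal; after
    ! -fdefault-real-8 it reports kind 8, 15 digits, and 64 bits.
    print *, "kind of 0.1:         ", kind(0.1)
    print *, "precision of 0.1:    ", precision(0.1)
    print *, "storage bits of 0.1: ", storage_size(0.1)
end program check_default_real
```

Compiling and running this once with and once without the flag shows immediately whether the option reached the compiler, which also sidesteps the worry about F2py silently ignoring flags.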

Upvotes: 5

Views: 4338

Answers (3)

Scientist

Reputation: 1835

Adding to @francescalus's answer: since the meaning of double precision can change across platforms and compilers, it is good practice, in my opinion, to explicitly declare the desired kind of a constant using the standard Fortran suffix convention, as in the following example:

program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
    write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
    write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test

Compiling this code with gfortran gives:

$gfortran -std=gnu *.f95 -o main
$main
            real64: .666666666666667    
  double precision: .666666666666667    
  single precision: .666666686534882    

Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.

Now consider another scenario, where a variable is declared with the 64-bit real kind but the numerical computation itself is done in 32-bit precision:

program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    real(RK) :: real_64
    real_64 = 2.e0 / 3.e0
    write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
    real_64 = 2._RK / 3._RK
    write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test

which gives the following output,

$gfortran -std=gnu *.f95 -o main
$main
 32-bit accuracy is returned:          0.666666686534882    
 64-bit accuracy is returned:          0.666666666666667    

Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not conform to the 64-bit precision you want. The reason is that the division is first carried out in the (default 32-bit) precision of the literal constants and only then stored in the 64-bit variable real_64, so it differs from the more accurate answer on the second line of the output.

So the bottom-line message is: it is always good practice to explicitly declare the kind of literal constants in Fortran using the "underscore" convention.

Upvotes: 4

johncampbell

Reputation: 203

The answer to your question is: yes, you do need to indicate that the constant is double precision. Using 0.1 is a common example of this problem, because its 4-byte and 8-byte representations differ. Constants such as 0.5, whose extra precision bits are all zero, don't have this problem.

This behaviour was pinned down in Fortran at F90 and has caused problems for the conversion and reuse of many legacy FORTRAN codes. Prior to F90, the assignment in double precision a; a = 0.1 could have used either a real or a double precision 0.1 constant, although all the compilers I used supplied a double precision value. This is a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported, e.g. PI=3.141592654 appeared in code on a forum this week.

However, using 0.1 as an actual argument to a subroutine has always caused problems, since it is passed as a default real constant.
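The 0.1-versus-0.5 point above can be demonstrated directly. In this sketch, a default real 0.1 assigned to a double precision variable carries the single-precision rounding error with it, while 0.5 is exact in both kinds:

```fortran
program literal_precision
    implicit none
    double precision :: a, b
    a = 0.1      ! default real 0.1, widened to double: keeps the
                 ! single-precision rounding error
    b = 0.1d0    ! true double precision 0.1
    print *, a == b          ! prints F: the two values differ
    ! 0.5 is exactly representable in both kinds, so widening the
    ! default real constant loses nothing:
    print *, 0.5 == 0.5d0    ! prints T
end program literal_precision
```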

So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant where one is required. It is not a user-friendly approach.

Upvotes: 0

francescalus

Reputation: 32366

In a Fortran program, 1.0 is always a default real literal constant and 1.0d0 is always a double precision literal constant.

However, "double precision" means different things in different contexts.

In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.

gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.

So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
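This distinction can be seen by printing the kinds of the two literals. In this sketch (the kind numbers in the comments are gfortran's usual values, not guaranteed by the standard), 1.0 remains a default real literal whatever the flags do; the flag only changes which kind "default real" maps to:

```fortran
program literal_kinds
    use, intrinsic :: iso_fortran_env, only: real64
    implicit none
    ! Without flags, gfortran typically prints 4 and 8 here;
    ! with -fdefault-real-8 both kinds are usually 8, but 1.0 is
    ! still a "default real" literal either way.
    print *, "kind(1.0)   = ", kind(1.0)
    print *, "kind(1.0d0) = ", kind(1.0d0)
    print *, "default real is IEEE 64-bit: ", kind(1.0) == real64
end program literal_kinds
```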

Questions like this one reflect the implications of precision in literal constants. To anyone who asks my advice about flags like -fdefault-real-8, I say: avoid them.

Upvotes: 6
