AboAmmar

Reputation: 5559

Generic Float type in Julia language?

My question is simple: is there a generic Float type in Julia? For integers, one can simply write Int and it resolves to Int32 on 32-bit systems or Int64 on 64-bit systems. For floats, though, see the example function below:

function bb(n)
    b = Array{Float64}(undef, n)
    b[1] = 0.9999
    for i = 2:n
        @inbounds b[i] = b[i-1] * 0.9999
    end
    println(b[n])
end

bb(10^3)
@time bb(10^3)
@time bb(10^8)

It gives the following timing results along with total memory allocations:

0.9048328935585562
0.9048328935585562
  0.000100 seconds (135 allocations: 15.750 KB)
2.4703e-320
  3.230642 seconds (14 allocations: 762.940 MB, 1.51% gc time)

Now change the first line to b = Array{AbstractFloat}(undef, n) and see the ridiculously large timings and memory allocations:

0.9048328935585562
0.9048328935585562
  0.003564 seconds (2.13 k allocations: 46.953 KB)
2.4703e-320
  351.068176 seconds (200.00 M allocations: 3.725 GB, 0.74% gc time)

There is no b = Array{Float}(undef, n) I can use; the only solution I came up with is the inelegant b = Array{typeof(1.0)}(undef, n).

Upvotes: 3

Views: 1992

Answers (1)

Frames Catherine White

Reputation: 28232

Your issue with AbstractFloat has nothing to do with 32-bit vs. 64-bit.

Julia has no Float analogous to Int because floating-point literals are always Float64.
I.e. on both 64-bit and 32-bit systems, typeof(1.0) == Float64. (For a Float32 literal, use 1.0f0.)

If you really wanted one you would need to define it as

@static if Sys.WORD_SIZE == 64
    const Float = Float64
else
    const Float = Float32
end
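A quick sanity check of such an alias (a sketch; the `Float` name is this answer's invention, not part of Base):

```julia
# Define a word-size-dependent Float alias (this answer's convention, not Base):
@static if Sys.WORD_SIZE == 64
    const Float = Float64
else
    const Float = Float32
end

# The alias width matches the machine word size:
@assert 8 * sizeof(Float) == Sys.WORD_SIZE
```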

But this doesn't seem useful, since it wouldn't correspond to anything. It wouldn't even be a closer map to the hardware, because the CPU's floating-point implementation is (generally) 64-bit, even on 32-bit CPUs.

See this thread on Discourse: Float type like Int type.

There is no advantage to using Float32 on a 32-bit system and Float64 on a 64-bit system. There are advantages to using Float32 everywhere -- if you don't need the accuracy -- the key one being halved memory use, which halves the time spent allocating and the time spent on inter-process communication.
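The halved memory footprint is easy to see directly (a minimal sketch):

```julia
n = 10^6
a64 = zeros(Float64, n)   # 8 bytes per element
a32 = zeros(Float32, n)   # 4 bytes per element

# A Float32 array of the same length takes exactly half the memory:
@assert sizeof(a32) == sizeof(a64) ÷ 2
```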

On to your performance issues:

The poor performance of b = Array{AbstractFloat}(undef, n) is because you are creating a container with an abstract element type. See the performance tip Avoid containers with abstract type parameters. Such an array b has been declared to possibly contain elements of any number of different types: some could be Float16, others Float32, some might even be BigFloats or an ArbFloat{221}. Since no single memory layout fits all of those, the array is stored as an array of pointers, and a pointer must be dereferenced every time an element is touched. That indirection is what makes containers of abstract types slow.
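To make the mixed-type point concrete (a sketch; the specific element values are illustrative):

```julia
# An AbstractFloat array may legally hold several concrete float types at once,
# so Julia must store boxed pointers rather than inline 64-bit values:
b = AbstractFloat[Float16(1.0), 2.0f0, 3.0, BigFloat(4)]

@assert !isconcretetype(eltype(b))   # AbstractFloat is not a concrete type
@assert isconcretetype(Float64)      # Float64 is, so Array{Float64} stores values inline
```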

So writing b = Array{typeof(1.0)}(undef, n) is exactly equivalent to writing b = Array{Float64}(undef, n); nothing different is happening there. So that is clearly not a solution to your true problem.

Assuming your true problem is that you want to specify the type returned then you should pass it in as a parameter:

function bb(T, n)
    b = Array{T}(undef, n)
    b[1] = 0.9999
    for i = 2:n
        @inbounds b[i] = b[i-1] * 0.9999
    end
    println(b[n])
end

Call that with (for example) bb(Float32, 100). All the math still happens in Float64, since it is specified using Float64 literals, but when you assign into the array, convert(T, ...) will implicitly be called.
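The implicit conversion on assignment can be checked directly (a minimal sketch):

```julia
a = Array{Float32}(undef, 2)
a[1] = 0.9999            # a Float64 literal; setindex! calls convert(Float32, ...)

@assert typeof(a[1]) == Float32   # stored value is Float32
@assert a[1] ≈ 0.9999             # and approximately equal to the literal
```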

Alternatively you might want to pass in the values, and infer the type:

function bb(n, b1::T) where {T}
    b = Array{T}(undef, n)
    b[1] = b1
    for i = 2:n
        @inbounds b[i] = b[i-1] * b1
    end
    println(b[n])
end

Call that with: bb(100, 0.9999f0)

Upvotes: 11
