Reputation: 626
In Julia, if I have two variables,
i = 100::Int64
x = 33.333::Float64
and I want a variable y which is
y = i*x
Now my question is: since this is a calculation that mixes Int64 and Float64, will it be slow? I mean, in Fortran the code is compiled first and then run, so I guess the conversion has already been done by the compilation stage. But in Julia, since there is no compiling, it basically just runs. I heard that if there are a lot of these conversions the code can be slow. Is that still true for Julia nowadays?
If those type conversions can slow down Julia code, what should I do to prevent them while still making the code as fast as possible?
Thanks! I am sorry if the questions are very naïve.
Upvotes: 3
Views: 144
Reputation: 152
Let's do a test:
julia> using BenchmarkTools
julia> @btime [100*33.33 for i in 1:100000000]
218.062 ms (2 allocations: 762.94 MiB)
julia> @btime [100.0*33.33 for i in 1:100000000]
218.326 ms (2 allocations: 762.94 MiB)
They're pretty much the same.
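If you want to see this at the level of your own code, you can compare implicit promotion with an explicit conversion; both give the same result, and the compiler emits the same conversion either way. A minimal sketch (the variable names mirror the question):

```julia
# Mixed-type multiplication: Julia promotes the Int64 operand to Float64,
# then performs an ordinary Float64 multiply.
i = 100          # Int64
x = 33.333       # Float64

y1 = i * x              # implicit promotion
y2 = Float64(i) * x     # explicit conversion, same result

# Both are Float64 and bitwise identical.
```

Writing the conversion yourself buys you nothing here; the promotion the compiler inserts is the same single instruction.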
in Julia, since there is no compiling
There is compiling in Julia, we can look at the machine code generated:
julia> code_llvm(+, (Int, Float64))
; @ promotion.jl:321 within `+'
define double @"julia_+_807"(i64 signext %0, double %1) {
top:
; ┌ @ promotion.jl:292 within `promote'
; │┌ @ promotion.jl:269 within `_promote'
; ││┌ @ number.jl:7 within `convert'
; │││┌ @ float.jl:94 within `Float64'
%2 = sitofp i64 %0 to double
; └└└└
; @ promotion.jl:321 within `+' @ float.jl:326
%3 = fadd double %2, %1
; @ promotion.jl:321 within `+'
ret double %3
}
julia> code_llvm(+, (Float64, Float64))
; @ float.jl:326 within `+'
define double @"julia_+_796"(double %0, double %1) {
top:
%2 = fadd double %0, %1
ret double %2
}
So the conversion is resolved at compile time (it becomes a single sitofp instruction). BUT... the compiler handles these things really well; it's nothing to worry about.
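The rules the compiler follows here are exposed in Julia's promotion machinery, so you can inspect them directly from the REPL. A quick sketch:

```julia
# promote_type reports the common type Julia will use for mixed arithmetic;
# promote performs the actual conversion on values.
promote_type(Int64, Float64)   # Float64
promote(100, 33.333)           # (100.0, 33.333)

# And the result type of a mixed operation confirms it:
typeof(100 * 33.333)           # Float64
```

This is the same `promote`/`convert` chain visible in the `code_llvm` comments above, just called by hand.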
Upvotes: 3