Reputation: 608
I'm implementing Newton's method for finding roots in a precompiled library. The common case will be to work with functions from Float64 to Float64, and I want an optimized compiled version of it to exist in the library. I will, of course, implement a type-generic version too, but Julia will need some way of differentiating the methods by signature so that it knows which one to call at runtime. Presently, the implementation is:
#Float64-optimized version
function findrootNewton( func, funcder, guess::Float64,
                         rtol::Float64=1e-12, abstol::Float64=1e-12, maxiter::Int=100 )
    #make sure tolerances are positive and the iteration cap makes sense
    if rtol <= 0.0
        error("findrootNewton: rtol must be a positive number")
    end
    if abstol <= 0.0
        error("findrootNewton: abstol must be a positive number")
    end
    if maxiter <= 0
        error("findrootNewton: maxiter must be a positive integer")
    end

    converged::Bool = false
    oldx::Float64 = guess
    newx::Float64 = oldx - ((func(oldx) / funcder(oldx))::Float64)
    absdiff = abs(oldx - newx)
    iter = 2
    #iterate while NOT yet converged and the iteration budget remains
    while absdiff > abstol && absdiff > rtol * abs(newx) && iter <= maxiter
        oldx = newx
        newx = oldx - func(oldx) / funcder(oldx)
        absdiff = abs(oldx - newx)
        iter += 1
    end #while not converged and iter <= maxiter
    if iter <= maxiter
        converged = true
    end

    return (newx, converged)
end #findrootNewton
#Generic version
function findrootNewton( func, funcder, guess::Number,
                         rtol::Real=1e-12, abstol::Real=1e-12, maxiter::Int=100 )
    #make sure tolerances are positive and the iteration cap makes sense
    if rtol <= 0
        error("findrootNewton: rtol must be a positive number")
    end
    if abstol <= 0
        error("findrootNewton: abstol must be a positive number")
    end
    if maxiter <= 0
        error("findrootNewton: maxiter must be a positive integer")
    end

    converged::Bool = false
    newx = guess - func(guess) / funcder(guess)
    #promote the guess to the iterate's type so the loop stays type-stable
    oldx = convert(typeof(newx), guess)
    absdiff = abs(oldx - newx)
    iter = 2
    #iterate while NOT yet converged and the iteration budget remains
    while absdiff > abstol && absdiff > rtol * abs(newx) && iter <= maxiter
        oldx = newx
        newx = oldx - func(oldx) / funcder(oldx)
        absdiff = abs(oldx - newx)
        iter += 1
    end #while not converged and iter <= maxiter
    if iter <= maxiter
        converged = true
    end

    return (newx, converged)
end #findrootNewton
This has not been used or debugged yet. For example, I'm not checking for the derivative hitting zero, because the particular case I'm coding this for does not need it.
Notice that if the guess, rtol, and abstol arguments are given as Float64, but the functions return BigFloat rather than Float64, then the code will fail at the type assertion in the definition of newx, even though a method appropriate to generic functions is available. Is there a way to avoid this problem?
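To make the failure concrete, here is a hedged sketch (the function pair is illustrative, not from the question):

#a function pair that returns BigFloat even when given a Float64
f(x) = big(2.0) * x - 1
fder(x) = big(2.0)

#guess::Float64 makes dispatch select the Float64-optimized method,
#which then throws a TypeError at the (...)::Float64 assertion
findrootNewton(f, fder, 0.3)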
Edit: I can specify the type of data a variable stores in Julia, for example:
x::Float64 = 2.5
Is it possible to similarly specify the signature of a variable that can store (pointers to) functions?
Upvotes: 6
Views: 10074
Reputation: 149
There are absolutely cases where it is useful to be able to specify the type of a function. The example I have found is when the function is embedded in a struct. Below I give an example where simply changing value = ... to value::Float64 = ... yields a 3x improvement in run time and a 2x improvement in allocations.
using BenchmarkTools

struct ThingWithCallback
    call_back::Function   #abstract field type: calls through it cannot be inferred
end

function apply_call_back(f, data::Vector{Float64})
    for (i, value) in enumerate(data)
        value = f.call_back(value)
        data[i] *= value
    end
end

function apply_call_back_with_type(f, data::Vector{Float64})
    for (i, value) in enumerate(data)
        value::Float64 = f.call_back(value)   #assert the callback's return type
        data[i] *= value
    end
end
display(@benchmark apply_call_back(f, data) setup=(f=ThingWithCallback(x -> 2 * x); data=rand(1000)))
display(@benchmark apply_call_back_with_type(f, data) setup=(f=ThingWithCallback(x -> 2 * x); data=rand(1000)))
@benchmark apply_call_back(f, data) setup=(f=ThingWithCallback(x -> 2 * x); data=rand(1000))
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 37.900 μs … 516.300 μs ┊ GC (min … max): 0.00% … 87.84%
Time (median): 39.500 μs ┊ GC (median): 0.00%
Time (mean ± σ): 41.316 μs ± 18.195 μs ┊ GC (mean ± σ): 1.68% ± 3.67%
Memory estimate: 70.14 KiB, allocs estimate: 4489.
@benchmark apply_call_back_with_type(f, data) setup=(f=ThingWithCallback(x -> 2 * x); data=rand(1000))
BenchmarkTools.Trial: 10000 samples with 1 evaluation.
Range (min … max): 11.800 μs … 875.600 μs ┊ GC (min … max): 0.00% … 96.78%
Time (median): 13.200 μs ┊ GC (median): 0.00%
Time (mean ± σ): 14.382 μs ± 21.465 μs ┊ GC (mean ± σ): 4.11% ± 2.75%
Memory estimate: 31.25 KiB, allocs estimate: 2000.
It turns out that you can do even better by using FunctionWrappers.jl, which further improves the run time and eliminates allocations entirely.
import FunctionWrappers.FunctionWrapper

struct ThingWithWrappedCallback
    call_back::FunctionWrapper{Float64, Tuple{Float64}}   #concrete, callable field type
end
display(@benchmark apply_call_back(f, data) setup=(f=ThingWithWrappedCallback(x -> 2 * x); data=rand(1000)))
BenchmarkTools.Trial: 10000 samples with 6 evaluations.
Range (min … max): 5.033 μs … 60.517 μs ┊ GC (min … max): 0.00% … 0.00%
Time (median): 5.267 μs ┊ GC (median): 0.00%
Time (mean ± σ): 5.526 μs ± 1.361 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
Memory estimate: 0 bytes, allocs estimate: 0.
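A minimal usage sketch (assuming the struct above): the closure is converted to the wrapper type on construction, and the wrapped field is directly callable.

thing = ThingWithWrappedCallback(x -> 2 * x)  #closure converted to the FunctionWrapper
thing.call_back(1.5)                          #returns 3.0, concretely typed as Float64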
Upvotes: 1
Reputation: 8512
This is not the way to write Julia code. You are writing Julia as if it were a statically typed language. It is an easy mistake to make, because Julia "looks" a lot like a statically typed language. To get performance in Julia, the important thing is not annotating with types but achieving type stability.
That means writing the code so that, as a piece of code executes, the types of variables don't change. Julia gives you a number of functions and facilities to aid you in this, such as zero, iszero, similar, etc.
Just use your generic function; it will have the same performance as the "specialized" one. The point of specialized functions in Julia for particular types is only when you need a different algorithm. For instance, intersect(circle, triangle) requires different code from intersect(circle, circle). However, you would not write a different method for circles using 32-bit floating point and those using 64-bit floating point numbers.
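To illustrate that point with a hedged sketch (the newtonstep name is illustrative): a single generic definition already gets a specialized native compilation per concrete argument type.

#one generic Newton step; the JIT compiles a separate specialized
#instance for each concrete argument type it is called with
newtonstep(x, f, fder) = x - f(x) / fder(x)

newtonstep(1.0, sin, cos)    #triggers a Float64 specialization
newtonstep(1.0f0, sin, cos)  #triggers a distinct Float32 specialization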
To give some concrete advice, let me comment on some of the code you have written.
if rtol <= 0.0
    error("findrootNewton: rtol must be a positive number")
end
For the generic version, it is better to write one of these versions, in prioritized order:

1. if rtol <= zero(rtol)
2. if rtol <= zero(T), where T is the type of rtol
3. if rtol <= 0

because that makes sure you are comparing numbers of equal type, thus avoiding type conversions. The integer 0 is better than the floating point 0.0 because it will typically cause fewer type conversions/promotions.
Just to nitpick, you may want to use throw(DomainError(rtol, "findrootNewton: rtol must be a positive number")) to indicate that one of the function arguments is outside of its domain.
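Putting both suggestions together, a hedged sketch of the argument check (checkpositive is an illustrative helper, not from the original code):

function checkpositive(rtol::Real)
    #zero(rtol) produces a zero of rtol's own type, so the comparison
    #never forces a conversion or promotion
    if rtol <= zero(rtol)
        throw(DomainError(rtol, "findrootNewton: rtol must be a positive number"))
    end
    return rtol
end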
I can specify the type of data a variable stores in Julia, for example
x::Float64 = 2.5
This is unnecessary and the wrong way to think about what you are doing. Remember, Julia is not a statically typed language. If you are a C/C++ developer, you may think about variables as little memory boxes of different sizes, each of which can hold a float, integer, or boolean. This assignment then means putting 2.5 into a 64-bit floating-point box.
However, that is not what happens in a dynamic language like Julia. Conceptually, you create a floating-point object 2.5 and stick a label x on it. Assignment is, in a way, the reverse of what you may be thinking: you are not assigning a number to a box named x; rather, you are sticking a label x onto the number 2.5. So writing stuff like:
converged::Bool = false
is totally unnecessary. You are not making sure false is put into a boolean-sized box; there is no box. Instead, you are sticking the label converged on a false object, and afterwards you are asserting that the label converged is attached to a boolean object. This gives you no performance or memory advantage.
You only do the following line in your generic version, and presumably you assume it is less performant than what you do in your Float64 version:
oldx = convert(typeof(newx), guess)
However, the calls to convert and typeof do not make any difference. In your ideal case, where all the types match up, these calls just get optimized away. Look at this simple example:
julia> foobar(a, b) = convert(typeof(a), a + b)
foobar (generic function with 1 method)
julia> @code_warntype foobar(1, 1)
Body::Int64
1 ─ %1 = (Base.add_int)(a, b)::Int64
└── return %1
julia> @code_warntype foobar(1.0, 1.0)
Body::Float64
1 ─ %1 = (Base.add_float)(a, b)::Float64
└── return %1
You can see that when the types match, this just gets reduced to a simple integer or floating-point add by the Julia JIT.
If you are not sure about the performance implications of writing functions one way or another, I advise you to get used to using the @code_warntype, @code_llvm, and @code_native macros. They give you valuable information about how Julia transforms your code given particular sets of arguments.
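For example, assuming the foobar definition above, the other two macros are invoked the same way:

julia> @code_llvm foobar(1.0, 1.0)    #optimized LLVM IR
julia> @code_native foobar(1.0, 1.0)  #actual machine instructions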
As for your question about whether you can create type assertions for function signatures: you cannot do that in Julia at present, and in your case it is not needed. Workarounds typically involve some trait-based approach: you could consider turning the arguments of your Newton method into a type, and then have your input dispatch on that.
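As a hedged sketch of that wrapper idea (all names here are illustrative, not an established API):

#wrap the problem setup in a parametric type and dispatch on the
#type parameter carried by the guess
struct NewtonProblem{T<:Number, F, G}
    func::F
    funcder::G
    guess::T
end

pathname(::NewtonProblem{Float64}) = "Float64-optimized path"
pathname(::NewtonProblem) = "generic path"

pathname(NewtonProblem(sin, cos, 1.0))       #"Float64-optimized path"
pathname(NewtonProblem(sin, cos, big"1.0"))  #"generic path"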
Upvotes: 4
Reputation: 8044
For clarity I will write my comment as an answer: you don't need to specialize function signatures on types in Julia, unless it is to implement specialized handling in the function body. Argument type assertion has no impact on code speed or compilability. See http://docs.julialang.org/en/latest/manual/performance-tips/
Type assertions in function arguments in Julia are mainly used to control multiple dispatch, i.e. different function behaviour for different types of input arguments. When you do not assert a type, the compiler will automatically compile a type-specialized version for each combination of input argument types.
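For instance (a hedged illustration): one untyped definition yields multiple compiled specializations, while annotations are what steer dispatch.

#no annotation needed for speed: the compiler specializes per call signature
square(x) = x * x
square(3)    #compiles and runs an Int64 specialization
square(3.0)  #compiles and runs a separate Float64 specialization

#annotations select behaviour, not performance
kind(x::Integer) = "integer"
kind(x::AbstractFloat) = "float"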
If you need, for another reason (e.g. to ensure type stability), to assert that the return type of a function is the same as the input, you can do:
function foo(x::T)::T where T
...
end
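For instance (a hedged example with an illustrative name), the annotation converts the computed result back to the argument type T:

function scale(x::T)::T where T
    return 2 * x
end

scale(1.5)   #3.0, a Float64
scale(0x01)  #2 * UInt8 promotes to Int64, but ::T converts back: 0x02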
Upvotes: 8