Reputation: 14521
Flux.jl provides a helpful train! function which, when paired with the @epochs macro, can serve as the main training loop. However, unlike most custom training loops, it prints no information about the model's accuracy or loss during each training epoch. The train! function does accept an optional callback which seems like it could be used to show the training accuracy, but I am unsure how to do this. Is it possible to get these values using @epochs and train!, or do I have to write a custom training loop?
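To illustrate, the kind of loop I mean looks something like this (the model, data, and loss here are just placeholders):

using Flux

model = Dense(2, 1)                         # placeholder model
data = [(rand(2, 10), rand(1, 10))]         # placeholder (x, y) batches
loss(x, y) = Flux.Losses.mse(model(x), y)
opt = Descent(0.1)

# runs, but prints nothing about loss or accuracy
Flux.@epochs 5 Flux.train!(loss, Flux.params(model), data, opt)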
Upvotes: 1
Views: 511
Reputation: 28212
There are many ways to do this (for better or worse, Julia is a TIMTOWTDI language after all), and I think there are even a few packages for it (I leave suggesting those to someone who knows them better). But I would warn you against fearing custom loops.
A custom training loop in Flux isn't as hard nor as advanced a feature as it sounds. They are good and idiomatic.
Julia has fast loops, so we can just use them directly and insert what we need where we need it, by writing normal code, rather than having to use complicated and less clear callbacks.
You can find docs on them here.
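For instance, a minimal custom loop that reports the loss each epoch might look like this (the model, data, and loss are illustrative):

using Flux

model = Dense(2, 1)
data = [(rand(2, 10), rand(1, 10)) for _ in 1:3]   # illustrative batches
loss(x, y) = Flux.Losses.mse(model(x), y)
opt = Descent(0.1)
ps = Flux.params(model)

for epoch in 1:5
    for (x, y) in data
        gs = gradient(() -> loss(x, y), ps)        # gradient w.r.t. the params
        Flux.Optimise.update!(opt, ps, gs)
    end
    # print (or log) whatever you want, wherever you want
    @info "epoch $epoch" loss = sum(loss(x, y) for (x, y) in data)
end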
Upvotes: 2
Reputation: 2580
One pattern is to construct the loss function as a do block; this anonymous function becomes the first argument of train!. It can contain whatever printing or logging you want before returning the loss, which is then used for computing the gradient.
julia> using Flux

julia> m = Dense(ones(2,2));
julia> Flux.@epochs 3 Flux.train!(params(m), ([1,2], [3,4]), Descent(0.05)) do d
res = m(d)
@show res[1] # intermediate value just printed
tot = sum(res)
@show tot # final value is used for the gradient
end
[ Info: Epoch 1
res[1] = 3.0
tot = 6.0
res[1] = 6.3999999999999995
tot = 12.799999999999999
[ Info: Epoch 2
res[1] = 2.0999999999999996
tot = 4.199999999999999
res[1] = 4.499999999999999
tot = 8.999999999999998
[ Info: Epoch 3
res[1] = 1.2
tot = 2.4
res[1] = 2.599999999999999
tot = 5.199999999999998
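As for the callback mentioned in the question: train! also takes a cb keyword argument, so a sketch like this (re-using the same m, with Flux.throttle to limit how often it prints) should work too:

julia> evalcb() = @show(sum(m([1, 2])));   # illustrative: print the loss on one fixed input

julia> Flux.train!(d -> sum(m(d)), params(m), ([1,2], [3,4]), Descent(0.05);
                   cb = Flux.throttle(evalcb, 5))   # call at most once every 5 seconds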
Upvotes: 2