Adrian

Reputation: 903

Julia parallelism: @distributed (+) slower than serial?

After seeing a couple of tutorials on Julia parallelism online, I decided to implement a small parallel snippet for computing the harmonic series.

The serial code is:

harmonic = function (n::Int64)
    x = 0
    for i in n:-1:1 # summing backwards to avoid rounding errors
        x += 1/i
    end
    x
end

And I made two parallel versions, one using the @distributed macro and another using the @everywhere macro (started with julia -p 2, by the way):

@everywhere harmonic_ever = function (n::Int64)
    x = 0
    for i in n:-1:1
        x += 1/i
    end
    x
end

harmonic_distr = function (n::Int64)
    x = @distributed (+) for i in n:-1:1
        x = 1/i
    end
    x
end

However, when I run the above code and @time it, I don't get any speedup - in fact, the @distributed version runs significantly slower!

@time harmonic(10^10)
>>> 53.960678 seconds (29.10 k allocations: 1.553 MiB) 23.60306659488827
job = @spawn harmonic_ever(10^10)
@time fetch(job)
>>> 46.729251 seconds (309.01 k allocations: 15.737 MiB) 23.60306659488827
@time harmonic_distr(10^10)
>>> 143.105701 seconds (1.25 M allocations: 63.564 MiB, 0.04% gc time) 23.603066594889185

What completely and absolutely baffles me is the "0.04% gc time". I'm clearly missing something, and the examples I saw weren't written for version 1.0.1 (one of them, for example, used @parallel).

Upvotes: 3

Views: 913

Answers (1)

carstenbauer

Reputation: 10127

Your distributed version should be:

function harmonic_distr2(n::Int64)
    x = @distributed (+) for i in n:-1:1
        1/i # no x assignment here
    end
    x
end

The @distributed loop will accumulate the values of 1/i on every worker and then combine the partial sums with (+) on the master process.
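
Conceptually it behaves roughly like the hand-rolled version below (just an illustrative sketch assuming one chunk per worker, not the macro's actual expansion; the name harmonic_manual and the chunking scheme are made up for illustration):

using Distributed

# Sketch of the reduction @distributed (+) performs: split the range into
# one chunk per worker, let each worker compute a local partial sum,
# then combine the fetched partial sums with (+) on the calling process.
function harmonic_manual(n::Int64)
    ws = workers()
    chunks = Iterators.partition(n:-1:1, cld(n, length(ws)))
    futures = [@spawnat(p, sum(1/i for i in chunk)) for (p, chunk) in zip(ws, chunks)]
    return sum(fetch, futures)
end

Run with julia -p 2, this gives the same sum as harmonic_distr2 up to floating-point rounding; the point is only to make the worker-local accumulation and the final (+) reduction explicit.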

Note that it is also generally better to use the @btime macro from BenchmarkTools.jl instead of @time for benchmarking:

julia> using Distributed, BenchmarkTools; addprocs(4);

julia> @btime harmonic(1_000_000_000); # serial
  1.601 s (1 allocation: 16 bytes)

julia> @btime harmonic_distr2(1_000_000_000); # parallel
  754.058 ms (399 allocations: 36.63 KiB)

julia> @btime harmonic_distr(1_000_000_000); # your old parallel version
  4.289 s (411 allocations: 37.13 KiB)

The parallel version is, of course, slower if run only on one process:

julia> rmprocs(workers())
Task (done) @0x0000000006fb73d0

julia> nprocs()
1

julia> @btime harmonic_distr2(1_000_000_000); # (not really) parallel
  1.879 s (34 allocations: 2.00 KiB)

Upvotes: 6
