hjab

Reputation: 238

Julia: why doesn't shared memory multi-threading give me a speedup?

I want to use shared-memory multi-threading in Julia. As the Threads.@threads macro does internally, I can use ccall(:jl_threading_run ...) to run a function on every thread. But whilst my code now runs in parallel, I don't get the speedup I expected.

The following code is intended as a minimal example of the approach I'm taking and the performance problem I'm having: [EDIT: see below for an even more minimal example]

nthreads = Threads.nthreads()
test_size = 1000000
println("STARTED with ", nthreads, " thread(s) and test size of ", test_size, ".")
# Something to be processed:
objects = rand(test_size)
# Somewhere for our results
results = zeros(nthreads)
counts = zeros(nthreads)
# A function to do some work: each thread processes an interleaved
# subset of `objects`, starting at its own thread id and striding by
# the number of threads.
function worker_fn()
    work_idx = Threads.threadid()
    my_result = results[Threads.threadid()]
    while work_idx > 0
        my_result += objects[work_idx]
        work_idx += nthreads
        if work_idx > test_size
            break
        end
        counts[Threads.threadid()] += 1
    end
    # Store this thread's accumulated result back into the shared array.
    results[Threads.threadid()] = my_result
end

# Call our worker function using jl_threading_run
@time ccall(:jl_threading_run, Ref{Cvoid}, (Any,), worker_fn)

# Verify that we made as many calls as we think we did.
println("\nCOUNTS:")
println("\tPer thread:\t", counts)
println("\tSum:\t\t", sum(counts))

On an i7-7700, a typical single-threaded result is:

STARTED with 1 thread(s) and test size of 1000000.
 0.134606 seconds (5.00 M allocations: 76.563 MiB, 1.79% gc time)

COUNTS:
    Per thread:     [999999.0]
    Sum:            999999.0

And with 4 threads:

STARTED with 4 thread(s) and test size of 1000000.
  0.140378 seconds (1.81 M allocations: 25.661 MiB)

COUNTS:
    Per thread:     [249999.0, 249999.0, 249999.0, 249999.0]
    Sum:            999996.0

Multi-threading slows things down! Why?

EDIT: A better minimal example can be created using the @threads macro itself:

test_size = 1000000
a = zeros(Threads.nthreads())
b = rand(test_size)
calls = zeros(Threads.nthreads())
@time Threads.@threads for i = 1 : test_size
    a[Threads.threadid()] += b[i]
    calls[Threads.threadid()] += 1
end

I falsely assumed that the @threads macro's inclusion in Julia would mean that there was a benefit to be had.

Upvotes: 4

Views: 1780

Answers (1)

Bogumił Kamiński

Reputation: 69949

The problem you have is most probably false sharing: the per-thread slots of a and calls (and of results and counts in your first example) are adjacent in memory, so they sit on the same cache line, and every write by one thread invalidates that line in the other cores' caches, which serializes what looks like independent work.

You can solve it by spacing the locations each thread writes to far enough apart, like this (a "quick and dirty" implementation to show the essence of the change):

julia> function f(spacing)
           test_size = 1000000
           a = zeros(Threads.nthreads()*spacing)
           b = rand(test_size)
           calls = zeros(Threads.nthreads()*spacing)
           Threads.@threads for i = 1 : test_size
               @inbounds begin
                   a[Threads.threadid()*spacing] += b[i]
                   calls[Threads.threadid()*spacing] += 1
               end
           end
           a, calls
       end
f (generic function with 1 method)

julia> using BenchmarkTools

julia> @btime f(1);
  41.525 ms (35 allocations: 7.63 MiB)

julia> @btime f(8);
  2.189 ms (35 allocations: 7.63 MiB)

The spacing of 8 works because 8 Float64 values occupy 64 bytes, which is a typical cache-line size, so each thread writes to its own cache line. Alternatively, and preferably, you can do the per-thread accumulation in a local variable like this (this approach should be uniformly faster):

# Split 1:n into Threads.nthreads() contiguous, nearly equal chunks
# and return the chunk belonging to the calling thread.
function getrange(n)
    tid = Threads.threadid()
    nt = Threads.nthreads()
    d, r = divrem(n, nt)
    from = (tid - 1) * d + min(r, tid - 1) + 1
    to = from + d - 1 + (tid ≤ r ? 1 : 0)
    from:to
end

function f()
    test_size = 10^8
    a = zeros(Threads.nthreads())
    b = rand(test_size)
    calls = zeros(Threads.nthreads())
    Threads.@threads for k = 1 : Threads.nthreads()
        # Accumulate into thread-local variables; the shared arrays are
        # written only once per thread, after the hot loop, which avoids
        # false sharing.
        local_a = 0.0
        local_c = 0.0
        for i in getrange(test_size)
            for j in 1:10
                local_a += b[i]
                local_c += 1
            end
        end
        a[Threads.threadid()] = local_a
        calls[Threads.threadid()] = local_c
    end
    a, calls
end
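
For completeness, here is a minimal sketch of how this version might be run and benchmarked; it assumes Julia was started with JULIA_NUM_THREADS set and that the BenchmarkTools package is installed:

using BenchmarkTools

a, calls = f()                  # per-thread partial sums and call counts
println("total sum:   ", sum(a))
println("total calls: ", sum(calls))

@btime f();                     # compare against the spaced version above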

Also note that if you are running more threads than you have physical cores (for example 4 threads on 2 physical cores with hyperthreading), the gains from threading will not be linear.
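
If you are unsure how many threads and logical CPUs you actually have, a quick check from within Julia is sketched below (Sys.CPU_THREADS reports logical CPUs, which may be twice the number of physical cores when hyperthreading is enabled):

println("Julia threads: ", Threads.nthreads())   # set via the JULIA_NUM_THREADS environment variable
println("Logical CPUs:  ", Sys.CPU_THREADS)      # includes hyperthreaded cores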

Upvotes: 11
