Reputation: 87
I have a question about how to use parallel computing in Julia. The following code does not work:
using Distributed
addprocs(10)
@everywhere include("ADMM2.jl")
@everywhere tuning = [0.04, 0.5, 0.1]
@everywhere include("Basicsetting.jl")
@everywhere using SharedArrays
## generate samples
n_simu = 10
Z_set = SharedArray{Float64, 3}(n, r, n_simu)
X_set = SharedArray{Float64, 3}(n, p, n_simu)
Y_set = SharedArray{Float64, 3}(n, q, n_simu)
Binit_set = SharedArray{Float64, 3}(p, r, n_simu)
Ginit_set = SharedArray{Float64, 3}(p, r, n_simu)
for i in 1:n_simu
    dataset = get_data(fun_list, n, p, q, B_true, G_true, snr, binary = false)
    Z_set[:,:,i] = dataset[:Z_scaled]
    X_set[:,:,i] = dataset[:X]
    Y_set[:,:,i] = dataset[:Y]
    ridge = get_B_ridge(dataset[:Z_scaled], dataset[:X], dataset[:Y], lambda=0.03)
    Binit_set[:,:,i] = ridge[:B]
    Ginit_set[:,:,i] = ridge[:G]
end
## optimization process
@sync @distributed for i in 1:n_simu
    Z = Z_set[:,:,i]
    X = X_set[:,:,i]
    Y = Y_set[:,:,i]
    B = copy(Binit_set[:,:,i])
    G = copy(Ginit_set[:,:,i])
    result2[i] = get_BG_ADMM3(Z, X, Y, B, G, lambda1=0.05, lambda2=0.2, lambda3=0.05, rho=1.0,
        control1 = Dict(:max_iter => 5e1, :tol => 1e-4, :rounding => 0.0),
        control2 = Dict(:elesparse_B => true, :lowrank_G => true, :elesparse_G => false, :rowsparse_G => true))
end
Without @distributed, the for loop runs without any problem.
Upvotes: 1
Views: 345
Reputation: 42194
You are not collecting any results in the for loop.
Please note that each variable assigned inside a @distributed for loop is created on a different worker process of the Julia cluster, so the master process never sees those assignments.
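For example, a plain array defined on the master is serialized to each worker, so writes inside the loop never reach the master's copy (a minimal sketch):
using Distributed
addprocs(2)

results = zeros(10)          # ordinary Array, lives on the master only

@sync @distributed for i in 1:10
    results[i] = i^2         # each worker mutates its own local copy
end

results                      # still all zeros on the master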
Normally the best strategy is to use an aggregator function to collect the results (for most scenarios I would also prefer such an approach over SharedArrays):
result = @distributed (append!) for i in 1:10
    res = rand() + i
    [res]
end
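Applied to your loop this could look roughly as follows (a sketch; I am assuming get_BG_ADMM3 returns a single object per simulation that you want to keep):
results = @distributed (append!) for i in 1:n_simu
    Z = Z_set[:,:,i]
    X = X_set[:,:,i]
    Y = Y_set[:,:,i]
    B = copy(Binit_set[:,:,i])
    G = copy(Ginit_set[:,:,i])
    res = get_BG_ADMM3(Z, X, Y, B, G, lambda1=0.05, lambda2=0.2, lambda3=0.05, rho=1.0,
        control1 = Dict(:max_iter => 5e1, :tol => 1e-4, :rounding => 0.0),
        control2 = Dict(:elesparse_B => true, :lowrank_G => true, :elesparse_G => false, :rowsparse_G => true))
    [(i, res)]   # wrap in a vector so append! can aggregate; keep i because the result order is not guaranteed
end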
BTW, use @views; you are now creating unnecessary copies of your matrices.
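For example, in the loop body (assuming get_BG_ADMM3 accepts any AbstractMatrix):
Z = @view Z_set[:,:,i]   # a view into the SharedArray instead of a copy
X = @view X_set[:,:,i]
Y = @view Y_set[:,:,i]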
Upvotes: 1