Shayan

Reputation: 6295

UndefVarError in Optimization Using JuMP

I'm trying to code an optimization problem in Julia using JuMP. The objective function involves two matrix multiplications.
First, multiply the vector w of size (10) by the matrix arr_C of size (20, 10). For the dimensions to line up, w is transposed to size (1, 10) and multiplied by arr_C', which yields a vector of size (20).
Second, multiply the vector w_each_sim of size (20) by the result of the first multiplication, which is also a vector of size (20). So the multiplication is (1×20) × (20×1) and yields a scalar. Please read to the last line of the question, because I applied updates according to the suggestions. My first try is as follows:

julia> using JuMP, Ipopt

julia> a = rand(20, 10);

julia> b = rand(20); b = b./sum(b)

julia> function port_opt(
            n_assets::Int8,
            arr_C::Matrix{Float64},
            w_each_sim::Vector{Float64})
        """
        Calculate weight of each asset through optimization

        Parameters
        ----------
            n_assets::Int8 - number of assets
            arr_C::Matrix{Float64} - array of C
            w_each_sim::Vector{Float64} - weights of each similar TW

        Returns
        -------
            w_opt::Vector{Float64} - weights of each asset
        """

        model = Model(Ipopt.Optimizer)
        @variable(model, 0<= w[1:n_assets] <=1)
        @NLconstraint(model, sum([w[i] for i in 1:n_assets]) == 1)
        @NLobjective(model, Max, w_each_sim * log10.([w[i]*arr_C[i] for i in 1:n_assets]))
        optimize!(model)

        @show value.(w)
        return value.(w)
    end


julia> port_opt(Int8(10), a, b)
ERROR: UndefVarError: i not defined
Stacktrace:
 [1] macro expansion
   @ C:\Users\JL\.julia\packages\JuMP\Z1pVn\src\macros.jl:1834 [inlined]
 [2] port_opt(n_assets::Int8, arr_C::Matrix{Float64}, w_each_sim::Vector{Float64})
   @ Main e:\MyWork\paperbase.jl:237
 [3] top-level scope
   @ REPL[4]:1

The problem is with the @NLconstraint line. How can I make this code work and get the optimization done?
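For reference, here is a plain-Julia sketch (no JuMP) of the objective described above, evaluated at a hypothetical fixed weight vector just to confirm that the dimensions line up as (1×20) × (20×1):

```julia
using LinearAlgebra

# Plain-Julia sketch of the intended objective, outside JuMP.
# The fixed weight vector w below is a hypothetical feasible candidate.
arr_C = rand(20, 10)                        # (20, 10)
w_each_sim = rand(20)
w_each_sim ./= sum(w_each_sim)              # (20,), sums to 1
w = fill(0.1, 10)                           # (10,), sums to 1

first_expr = arr_C * w                      # (20,); same as vec(w' * arr_C')
obj = dot(w_each_sim, log10.(first_expr))   # (1×20) × (20×1) -> scalar
```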

Additional tests

As @Shayan suggested, I rectified the objective function as follows:

function Obj(w, arr_C, w_each_sim)

    first_expr = w'*arr_C'
    second_expr = map(first_expr) do x
        log10(x)
    end

    return w_each_sim * second_expr
end

function port_opt(
        n_assets::Int8,
        arr_C::Matrix{Float64},
        w_each_sim::Vector{Float64})
    

    ....
    ....
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# Threw this:
ERROR: Unexpected array VariableRef[w[1], w[2], w[3], w[4], w[5], w[6], w[7], w[8], w[9], w[10]] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.

Now, based on @PrzemyslawSzufel's suggestions, I tried this:

function Obj(w, arr_C::Matrix{T}, w_each_sim::Vector{T}) where {T<:Real}

    first_expr = zeros(T, length(w_each_sim))
    for i∈size(w_each_sim, 1), j∈eachindex(w)
        first_expr[i] += w[j]*arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = 0
    for idx∈eachindex(w_each_sim)
        res += w_each_sim[idx]*second_expr[idx]
    end

    return res
end

function port_opt(
        n_assets::Int8,
        arr_C::Matrix{Float64},
        w_each_sim::Vector{Float64})

    model = Model()
    @variable(model, 0<= w[1:n_assets] <=1)
    @NLconstraint(model, +(w...) == 1)
    register(model, :Obj, Int64(n_assets), Obj, autodiff=true)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# threw this
ERROR: Unable to register the function :Obj because it does not support differentiation via ForwardDiff.

Common reasons for this include:

 * the function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * the function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.
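Beyond the storage issue the error message mentions, another likely culprit (an assumption, not verified against this exact JuMP version): a function registered with `register(model, :Obj, n, f; autodiff=true)` is called with `n` scalar arguments, one per decision variable, so it cannot also take `arr_C` and `w_each_sim` as extra vector arguments. Those would need to be captured in a closure, roughly:

```julia
# Sketch (assumption): a registered JuMP function receives its n decision
# variables as separate scalar arguments, so capture the data in a closure
# and keep the body generic over Real so ForwardDiff can differentiate it.
function make_obj(arr_C, w_each_sim)
    return function (w...)
        n_sims = length(w_each_sim)
        # first_expr[i] = (arr_C * w)[i], built from scalar terms
        first_expr = [sum(w[j] * arr_C[i, j] for j in eachindex(w)) for i in 1:n_sims]
        return sum(w_each_sim[i] * log(first_expr[i]) for i in 1:n_sims)
    end
end

# Hypothetical usage inside port_opt, splatting the variables:
# register(model, :Obj, Int64(n_assets), make_obj(arr_C, w_each_sim); autodiff = true)
# @NLobjective(model, Max, Obj(w...))
```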

Upvotes: 3

Views: 298

Answers (2)

Przemyslaw Szufel

Reputation: 42214

You need to reformulate the constraint either as:

@constraint(model, sum([w[i] for i in 1:n_assets]) == 1)

or

@NLconstraint(model, +(w...) == 1)

Regarding the objective: as pointed out in the comments, something is wrong with the vector multiplication. What you most likely want is the following (this works, and the model gets solved):

@NLobjective(model, Max, sum(w_each_sim[i]*log(w[i]*arr_C[i]) for i in 1:n_assets))

EDIT:

Further discussion of the same question: https://discourse.julialang.org/t/optimization-using-jump/89720

Upvotes: 1

Oscar Dowson

Reputation: 2574

This was asked on the JuMP community forum: https://discourse.julialang.org/t/optimization-using-jump/89720

Cross-posting my solution:

using JuMP, Ipopt
a = rand(20, 10)
b = rand(20); b = b./sum(b)

"""
Calculate weight of each asset through optimization

Parameters
----------
    n_assets::Int8 - number of assets
    arr_C::Matrix{Float64} - array of C
    w_each_sim::Vector{Float64} - weights of each similar TW

Returns
-------
    w_opt::Vector{Float64} - weights of each asset
"""
function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64},
)
    model = Model(Ipopt.Optimizer)
    @variable(model, 0<= w[1:n_assets] <=1)
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(
        model,
        Max, 
        sum(
            w_each_sim[i] * log10(sum(w[j] * arr_C[i, j] for j in 1:size(arr_C, 2)))
            for i in 1:length(w_each_sim)
        )
    )
    optimize!(model)
    return value.(w)
end

port_opt(Int8(10), a, b)

Upvotes: 1
