zezo

Reputation: 455

Bound optimization using pytorch

How do I include bounds when using an optimization method in PyTorch? I have a tensor of variables, and each variable has a different bound.

import torch

upper_bound = torch.tensor([1., 5., 10.], requires_grad=False)
lower_bound = torch.tensor([-1., -5., -10.], requires_grad=False)
X           = torch.tensor([10., -60., 105.], requires_grad=True)  # gradients require a float dtype
optimizer   = torch.optim.SGD([X], lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = ...
    loss.backward()
    optimizer.step()
    with torch.no_grad():  # in-place edits of a leaf tensor must bypass autograd
        X[:] = X.clamp(lower_bound, upper_bound)

But clamp only accepts a single number for each bound. Since each variable is bounded differently, I need to pass the upper- and lower-bound tensors instead.

Upvotes: 0

Views: 1419

Answers (1)

trialNerror

Reputation: 3563

Gradient descent is not the best method to achieve constrained optimization, but here you can enforce your constraints with:

x = ((X-lower_bound).clamp(min=0)+lower_bound-upper_bound).clamp(max=0)+upper_bound

This requires two clamps instead of one, but I could not find any native way to achieve element-wise bounds in a single call.
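A quick sanity check of the expression, as a sketch: the tensor values are taken from the question, and `torch.min`/`torch.max` is used as an equivalent element-wise formulation to verify the result.

```python
import torch

# Per-element bounds and starting values from the question
lower_bound = torch.tensor([-1., -5., -10.])
upper_bound = torch.tensor([1., 5., 10.])
X = torch.tensor([10., -60., 105.])

# Two-clamp trick: the inner clamp enforces the lower bound element-wise,
# the outer clamp enforces the upper bound element-wise
projected = ((X - lower_bound).clamp(min=0) + lower_bound
             - upper_bound).clamp(max=0) + upper_bound

# Equivalent element-wise formulation for comparison
expected = torch.min(torch.max(X, lower_bound), upper_bound)

print(projected)  # tensor([  1.,  -5.,  10.])
assert torch.equal(projected, expected)
```

Each element ends up at its own bound here because every starting value lies outside its interval; values already inside their interval pass through unchanged.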

Upvotes: 1
