campy

Reputation: 86

Which loss function calculates the distance between two contours

In my contour generation network I am using nn.L1Loss() to calculate how many pixels are wrong. This works for training, but the 2D distance between the real contour and the fake one would be far better. My aim is to measure the length of the generated contours afterwards. This code example with two binary images shows where nn.L1Loss() fails.

import cv2
import numpy as np
import torch
from torch import nn

p1 = [(15, 15), (45, 45)]
p2 = [(16, 15), (46, 45)]

# draw the contours as 1-pixel-wide rectangles (value 0) on a white background
real = cv2.rectangle(np.ones((60, 60)), p1[0], p1[1], color=0, thickness=1)
fake = cv2.rectangle(np.ones((60, 60)), p2[0], p2[1], color=0, thickness=1)

cv2.imshow('image', np.hstack((real, fake)))
cv2.waitKey(0)

real = torch.tensor(real)
fake = torch.tensor(fake)

losses = [nn.L1Loss(), nn.MSELoss(), nn.BCELoss(), nn.HingeEmbeddingLoss(), nn.SmoothL1Loss()]

for loss in losses:
    err = loss(real, fake)
    print(err * 60)

If I move the rectangle 1 pixel to the right:

-> L1 loss is 0.0333 * 60 = 2

If I move the rectangle 1 pixel to the right and 1 down:

-> L1 loss is 0.0656 * 60 = 3.933

If I move the rectangle 10 pixels to the right and 10 down:

-> L1 loss is 0.0656 * 60 = 3.933
Still the same! Which is no surprise, since the number of wrong pixels is the same. But the distance to them changed by 10 * 2**(1/2).

I also thought about the distance between both centers with this:

    # c is one contour as returned by cv2.findContours
    M = cv2.moments(c)
    cX = int(M['m10'] / M['m00'])
    cY = int(M['m01'] / M['m00'])
    centers.append([cX, cY])

The problem here is that the generated contours are not identical to the real ones and thus have different centers.
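A minimal sketch of that center comparison (assuming OpenCV 4.x and the numpy images real/fake from the snippet above, taken before the torch.tensor conversion) could look like this:

centers = []
for img in (real, fake):
    # extract the contour pixels (value 0) as an 8-bit mask
    mask = (img == 0).astype(np.uint8)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    M = cv2.moments(cnts[0])
    centers.append((M['m10'] / M['m00'], M['m01'] / M['m00']))

# Euclidean distance between the two centers
print(np.hypot(centers[0][0] - centers[1][0], centers[0][1] - centers[1][1]))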

This answer is close to what I am looking for, but it seems computationally very expensive:

https://stackoverflow.com/a/36505073/12337147

Is there a custom loss function that measures the distance the way I described?

Upvotes: 0

Views: 1449

Answers (1)

Bob

Reputation: 14654

Is this the equation you want?

$$d(C_1, C_2) = \int_{C_1} \min_{p_2 \in C_2} \lVert p_1 - p_2 \rVert^2 \, dp_1 + \int_{C_2} \min_{p_1 \in C_1} \lVert p_1 - p_2 \rVert^2 \, dp_2$$

As opposed to the area between the curves, which is given by the next equation if the contours are sufficiently similar to each other:

$$A(C_1, C_2) \approx \frac{1}{2} \left( \int_{C_1} \min_{p_2 \in C_2} \lVert p_1 - p_2 \rVert \, dp_1 + \int_{C_2} \min_{p_1 \in C_1} \lVert p_1 - p_2 \rVert \, dp_2 \right)$$

Does it mean the accumulated squared distances from points on one contour to the closest point on the other contour?

(figure: closest-point pairs between the two contours)

Given two arrays with the points on the contours, I can compute this with complexity O(M * N), where C1 has M points and C2 has N points, directly on the GPU. Alternatively, it could be computed in O(W * H), where W * H is the dimension of the images.

If this is exactly what you want, I can post a solution.
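For comparison, here is a rough sketch of how the O(W * H) image-based variant mentioned above could look (this is an assumption on my part, not part of the solution below): precompute a distance map of the real contour once with scipy, then accumulate it over the predicted contour pixels.

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

# hypothetical real contour image: 1 on contour pixels, 0 elsewhere
real_img = np.zeros((60, 60), dtype=np.uint8)
real_img[15, 15:46] = real_img[45, 15:46] = 1
real_img[15:46, 15] = real_img[15:46, 45] = 1

# distance of every pixel to the nearest real-contour pixel (fixed, no gradient needed)
dist_map = torch.tensor(distance_transform_edt(1 - real_img), dtype=torch.float32)

# stand-in for the network output: per-pixel probability of being a contour pixel
fake_prob = torch.rand(60, 60, requires_grad=True)

# accumulate squared distances over the predicted contour pixels
loss = (fake_prob * dist_map**2).sum() / (fake_prob.sum() + 1e-8)
loss.backward()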

The solution

First, let's create some example data.

import torch
import math
from torch import nn
import matplotlib.pyplot as plt;

# Number of points in each contour
M, N = 1000, 1500
t1 = torch.linspace(0, 2*math.pi, M).view(1, -1)
t2 = torch.linspace(0, 2*math.pi, N).view(1, -1)

c1 = torch.stack([torch.sin(t1),torch.cos(t1)], dim=2) # (1 x M x 2)
c2 = 1 - 2* torch.sigmoid(torch.stack([torch.sin(t2)*3 + 1, torch.cos(t2)*3 + 2], dim=2)) # (1 x N x 2)

With this we can compute the distance between every pair of points using torch.cdist. Below, torch.argmin finds the position of the minimum in each column of the distance matrix, i.e. the index of the closest point on the first contour for every point on the second contour. For computing the loss function, what matters is the distance itself, and that can be computed with torch.amin.

distances = torch.cdist(c1, c2); # (1 x M x N)
plt.imshow(distances[0]);
plt.xlabel('index in contour 2');
plt.ylabel('index in contour 1');
plt.plot(torch.argmin(distances[0], axis=0), '.r')

(figure: the distance matrix between the two contours, with the indices of the minimum-distance points marked in red)

However, what you want to accumulate is not the distance itself but a function of the distance. This could easily be obtained with torch.amin(f(distances)), and assuming f(.) is monotonically increasing it can be simplified to f(torch.amin(distances)).
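A quick sanity check of that simplification (a small sketch; the distances returned by torch.cdist are non-negative, so x**2 is monotonically increasing on them):

d = torch.rand(1, 5, 7)    # stand-in for a (1 x M x N) distance matrix
f = lambda x: x**2         # monotonically increasing on non-negative values
print(torch.allclose(torch.amin(f(d), dim=2), f(torch.amin(d, dim=2))))  # True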

To approximate the integral we can use the trapezoidal rule, which integrates a linear interpolation of a sampled function, in our case the contour sampled at the points you give.
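As a small illustration of the trapezoidal rule itself (a sketch separate from the contour code, reusing the torch and math imports from above), the manual formula below matches torch.trapz on a simple sampled function:

x = torch.linspace(0, 2 * math.pi, 100)
y = torch.sin(x)**2
# sum over segments of the mean of the two endpoint values times the segment length
manual = torch.sum((y[:-1] + y[1:]) / 2 * (x[1:] - x[:-1]))
print(manual, torch.trapz(y, x))  # both approximately pi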

This gives you a loss function

def contour_divergence(c1, c2, func=lambda x: x**2):
    c1 = torch.atleast_3d(c1)
    c2 = torch.atleast_3d(c2)
    # func applied to the distance from each point of c1 to the closest point of c2
    f = func(torch.amin(torch.cdist(c1, c2), dim=2))
    # this computes the length of each segment connecting two consecutive points of c1
    df = torch.sum((c1[:, 1:, :] - c1[:, :-1, :])**2, axis=2)**0.5
    # here is the trapezoidal rule; the extra factor 1/2 makes contour_dist (the sum
    # of the two one-sided terms below) the average of the two directions
    return torch.sum((f[:, :-1] + f[:, 1:]) * df[:, :], axis=1) / 4.0

def contour_dist(c1, c2, func=lambda x: x**2):
    return contour_divergence(c1, c2, func) + contour_divergence(c2, c1, func)

For the case where the line connecting the closest points is always perpendicular to the contour, contour_dist(c1, c2, lambda x: x) gives the area between the two curves.

This will give the area of the circle of radius 1 (all the points of the second contour are at the origin).

print(contour_dist(c1, c1*0, lambda x: x) / math.pi) # should print 1

Now consider the area between a circle of radius 1 and a circle of radius 0.5 (it will be pi * (1 - 1/4) = 0.75 * pi):

print(contour_dist(c1, c1*0.5, lambda x: x)  / math.pi) # should print 0.75

If you want a loss that accumulates the squared distance, just use contour_dist(c1, c2) (the default func is x**2); you can also pass an arbitrary function as a parameter. You can backpropagate the loss as long as you can backpropagate the passed func.
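A minimal usage sketch (assuming the predicted contour points come out of a differentiable network; pred here just stands in for such an output):

pred = c2.clone().requires_grad_(True)  # stand-in for the network's contour output
loss = contour_dist(c1, pred).sum()     # default func: squared distance
loss.backward()
print(pred.grad.shape)                  # gradients w.r.t. the contour points: (1, N, 2)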

Upvotes: 2
