kcc__

Reputation: 1648

Training a Siamese Network with Caffe

I am training a simple Siamese network that compares a pair of images. I followed the Siamese network example that ships with Caffe and made my own model.

My issue is with the Contrastive Loss function. The details of this function's implementation in Caffe are defined here. In my implementation I used a margin of 1, defined as follows:

layer {
  name: "loss"
  type: "ContrastiveLoss"
  bottom: "data"
  bottom: "data_p"
  bottom: "label"
  top: "loss"
  contrastive_loss_param {
    margin: 1
  }
}

My data are labeled 0 if dissimilar and 1 if similar. I am confused about the margin of the contrastive loss function. How is the margin parameter selected?

Page 3 of the original paper by Hadsell et al. states margin > 0, but is there any upper bound?
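For reference, here is a minimal NumPy sketch of the loss this layer computes, assuming the standard (non-legacy) Caffe formulation; feat_a, feat_b, and y are hypothetical placeholder arrays standing in for the two bottoms and the label:

import numpy as np

def contrastive_loss(feat_a, feat_b, y, margin=1.0):
    # feat_a, feat_b: (N, D) paired feature vectors; y: (N,), 1 = similar, 0 = dissimilar
    d = np.linalg.norm(feat_a - feat_b, axis=1)                   # Euclidean distance per pair
    similar_term = y * d ** 2                                     # pulls similar pairs together
    dissimilar_term = (1 - y) * np.maximum(margin - d, 0) ** 2    # pushes dissimilar pairs apart, up to margin
    return np.sum(similar_term + dissimilar_term) / (2 * len(y))  # Caffe normalizes by 2N

The margin only affects dissimilar pairs: once their distance exceeds the margin they contribute zero loss, so the margin sets the radius within which dissimilar pairs are still pushed apart.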

Upvotes: 1

Views: 649

Answers (3)

Raúl Gombru

Reputation: 91

The upper bound of the margin is the maximum distance between samples that the loss formulation can produce, so it depends on the distance chosen: if it's cosine distance, it's 1; if it's Euclidean distance, it's unbounded. This blog post explaining the computation of ranking losses covers it: https://gombru.github.io/2019/04/03/ranking_loss/
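A quick numerical sketch of that difference, using hypothetical random features rather than the question's network: cosine distance stays within a fixed range no matter how large the features get, while Euclidean distance grows with them:

import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(1000, 128))
b = rng.normal(size=(1000, 128)) * 100.0  # deliberately large-magnitude features

# Cosine distance stays in a fixed range regardless of feature scale
cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
print((1 - cos).min(), (1 - cos).max())   # bounded

# Euclidean distance has no fixed bound; it scales with the feature magnitudes
print(np.linalg.norm(a - b, axis=1).max())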

Upvotes: 0

cswah

Reputation: 431

The margin in a Siamese network is treated as a hyperparameter. A large margin value will make convergence extremely slow.

Upvotes: 0

Qinghao.Hu

Reputation: 101

In my opinion, it's like a hyperparameter. A large margin pushes dissimilar data further apart but makes the network harder to train; a small margin trains easily but can yield a poorly discriminative network. In general, you should choose a different margin for each dataset. As for the upper bound, it's determined by the bottoms 'data' and 'data_p': if their value range is constrained, for example if every absolute value is less than 1, then the distance has an upper bound, and so does any useful margin.
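A worked instance of that bound, assuming a hypothetical feature dimension D and the Euclidean distance used by ContrastiveLoss: if every value of 'data' and 'data_p' lies in [-1, 1], the distance can never exceed 2 * sqrt(D), so any larger margin behaves identically.

import numpy as np

D = 128  # hypothetical feature dimension of 'data' and 'data_p'

# If every feature value lies in [-1, 1], each coordinate of (data - data_p)
# lies in [-2, 2], so the Euclidean distance is at most 2 * sqrt(D).
max_distance = 2 * np.sqrt(D)
print(max_distance)  # ~22.6 here; a margin above this value is never active

# Sanity check with the extreme case: all ones vs. all minus ones
a, b = np.ones(D), -np.ones(D)
assert np.isclose(np.linalg.norm(a - b), max_distance)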

Upvotes: 0
