abeagomez

Reputation: 602

Keras merge VS concatenate, can't update my code

I have a Keras functional model for a CNN. I'm trying to implement a triplet-loss function. I found some posts about how to do that using "merge", which is now deprecated, but I'm not able to use "concatenate" the way I was using "merge".

The original code looks like this:

def triplet_loss(x):
    anchor, positive, negative = x
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)

    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), 0.05)
    loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
    return loss



def build_model(img_x, img_y):
    input_shape = Input(shape=(img_x, img_y, 3))
    c0 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), activation='relu') (input_shape)
    m0 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2)) (c0)
    f = Flatten()(m0)
    d1 = Dense(4024, activation='relu')(f)
    d2 = Dense(512, activation='sigmoid')(d1)

    anchor = Input(shape=(128, 254, 3))
    positive = Input(shape=(128, 254, 3))
    negative = Input(shape=(128, 254, 3))

    reid_model = Model(inputs=[input_shape], outputs=[d2])

    anchor_embed = reid_model(anchor)
    positive_embed = reid_model(positive)
    negative_embed = reid_model(negative)

    loss = merge([anchor_embed, positive_embed, negative_embed],
             mode=triplet_loss, output_shape=(1,))

    model = Model(inputs=[anchor, positive, negative], outputs=loss)
    model.compile(optimizer='Adam', loss='mean_absolute_error')
    return model

I was using loss = merge([anchor_embed, positive_embed, negative_embed], mode=triplet_loss, output_shape=(1,)) as a way to turn the output of the triplet_loss function into a Keras layer output (as suggested in https://codepad.co/snippet/F1uVDD5N). The concatenate function doesn't have a "mode" parameter. Is there any way to adapt my code to get the result of the loss function as a Keras layer output?

Upvotes: 1

Views: 815

Answers (1)

abeagomez

Reputation: 602

I finally found a way to compute the value of the triplet_loss function while keeping the original architecture of my code, by adding a Lambda layer:

import tensorflow as tf
from keras.models import Model
from keras.layers import (Input, Conv2D, MaxPooling2D, Flatten, Dense,
                          Lambda, concatenate)

def triplet_loss(x):
    anchor, positive, negative = x
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)

    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), 0.05)
    loss = tf.reduce_mean(tf.maximum(basic_loss, 0.0), 0)
    return loss

def build_model(img_x, img_y):
    input_shape = Input(shape=(img_x, img_y, 3))
    c0 = Conv2D(32, kernel_size=(3, 3), strides=(1, 1), activation='relu')(input_shape)
    m0 = MaxPooling2D(pool_size=(2, 2), strides=(2, 2)) (c0)
    f = Flatten()(m0)
    d1 = Dense(4024, activation='relu')(f)
    d2 = Dense(512, activation='sigmoid')(d1)

    anchor = Input(shape=(128, 254, 3))
    positive = Input(shape=(128, 254, 3))
    negative = Input(shape=(128, 254, 3))

    reid_model = Model(inputs=[input_shape], outputs=[d2])

    anchor_embed = reid_model(anchor)
    positive_embed = reid_model(positive)
    negative_embed = reid_model(negative)

    merged_output = concatenate([anchor_embed, positive_embed, negative_embed])
    loss = Lambda(triplet_loss, output_shape=(1,))(merged_output)

    model = Model(inputs=[anchor, positive, negative], outputs=loss)
    model.compile(optimizer='Adam', loss='mse',
                  metrics=["mae"])
    return model
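For anyone who wants to sanity-check the arithmetic, here is a small NumPy-only sketch of the same computation (NumPy ops stand in for tf.reduce_sum, tf.maximum, and tf.reduce_mean; the name triplet_loss_np and the toy embeddings are mine, not from the model above):

```python
import numpy as np

def triplet_loss_np(anchor, positive, negative, margin=0.05):
    # Squared Euclidean distance per row, matching tf.reduce_sum(tf.square(...), 1)
    pos_dist = np.sum((anchor - positive) ** 2, axis=1)
    neg_dist = np.sum((anchor - negative) ** 2, axis=1)
    # Margin of 0.05, as hard-coded in the original triplet_loss
    basic_loss = pos_dist - neg_dist + margin
    # Hinge at zero, then mean over the batch
    return np.mean(np.maximum(basic_loss, 0.0))

anchor   = np.array([[0.0, 0.0]])
positive = np.array([[1.0, 0.0]])   # pos_dist = 1.0
negative = np.array([[2.0, 0.0]])   # neg_dist = 4.0
print(triplet_loss_np(anchor, positive, negative))  # prints 0.0
```

The loss is zero here because the negative is already further from the anchor than the positive by more than the margin; swapping positive and negative gives 4.0 - 1.0 + 0.05 = 3.05.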

Upvotes: 2
