Xiashawn

Reputation: 21

How can I implement a pairwise loss function in TensorFlow?

I am implementing a customized pairwise loss function in TensorFlow. As a simple example, suppose the training data has 5 instances whose labels are

y=[0,1,0,0,0]

Assume the prediction is

y'=[y0',y1',y2',y3',y4']

In this case, a simple loss function may be

min f=(y0'-y1')+(y2'-y1')+(y3'-y1')+(y4'-y1')

Since y[1]=1, I just want to push the predictions y0', y2', y3', y4' as "far" from y1' as possible.
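Concretely, with hypothetical prediction values filled in, the loss above can be computed in NumPy:

```python
import numpy as np

y = np.array([0, 1, 0, 0, 0])
y_pred = np.array([0.2, 0.9, 0.1, 0.3, 0.05])  # hypothetical predictions

pos = y_pred[y == 1][0]   # the single positive prediction, y1'
neg = y_pred[y == 0]      # the negative predictions

f = np.sum(neg - pos)     # (y0'-y1') + (y2'-y1') + (y3'-y1') + (y4'-y1')
```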

However, I have no idea how to implement this in TensorFlow. In my current implementation, I use mini-batches and define the training labels as a placeholder, e.g. y = tf.placeholder("float", [None, 1]). With that setup I can't construct the loss function, because the None dimension means I don't know the batch size, or which instances have label "1" or "0", at graph-construction time.

Can anyone suggest how to do this in TensorFlow? Thanks!

Upvotes: 2

Views: 3825

Answers (2)

Paula Zhou

Reputation: 21

You could preprocess your data outside the model.

For example:

First, separate the positive and negative instances into 2 groups of inputs:

# data.py

import random

def load_data(data_x, data_y, k=1):
    """
    data_x: list of all instances
    data_y: list of their labels
    k: number of negative instances sampled per positive one
    """
    pos_x = []
    neg_x = []
    for x, y in zip(data_x, data_y):
        if y == 1:
            pos_x.append(x)
        else:
            neg_x.append(x)

    ret_pos_x = []
    ret_neg_x = []

    # randomly sample k negative instances for each positive one
    for x0 in pos_x:
        for x1 in random.sample(neg_x, k):
            ret_pos_x.append(x0)
            ret_neg_x.append(x1)

    return ret_pos_x, ret_neg_x
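For instance, with the question's 5-instance example, a hypothetical k = 2, and a fixed random seed, the pairing step yields two aligned lists of equal length (here inlined rather than calling load_data):

```python
import random

random.seed(0)  # for reproducibility

data_x = ["a", "b", "c", "d", "e"]
data_y = [0, 1, 0, 0, 0]

pos_x = [x for x, y in zip(data_x, data_y) if y == 1]   # ["b"]
neg_x = [x for x, y in zip(data_x, data_y) if y != 1]   # ["a", "c", "d", "e"]

k = 2  # negatives sampled per positive (hypothetical choice)
ret_pos_x, ret_neg_x = [], []
for x0 in pos_x:
    for x1 in random.sample(neg_x, k):
        # the positive instance is repeated once per sampled negative,
        # so ret_pos_x[i] and ret_neg_x[i] always form a (pos, neg) pair
        ret_pos_x.append(x0)
        ret_neg_x.append(x1)
```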

Next, in your model, define 2 placeholders instead of 1:

# model.py

import tensorflow as tf

class Model:
    def __init__(self, dim_x=128, learning_rate=1e-3):
        # dim_x and learning_rate are hyperparameters; set them for your data

        # shape: [batch_size, dim_x] (assume the x are vectors of dim_x)
        self.pos_x = tf.placeholder(tf.float32, [None, dim_x])
        self.neg_x = tf.placeholder(tf.float32, [None, dim_x])

        # shape: [batch_size]
        # NOTE: variables in some_func should be shared between the two
        # calls (e.g. with tf.variable_scope(..., reuse=True))
        self.pos_y = some_func(self.pos_x)
        self.neg_y = some_func(self.neg_x)

        # A more generalized form: loss = max(0, margin - y+ + y-)
        self.loss = tf.reduce_mean(tf.maximum(0.0, 1.0 - self.pos_y + self.neg_y))
        self.train_op = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)

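As a quick sanity check on the hinge term above (it is zero once a positive score exceeds its paired negative score by the margin), the same computation can be done in plain NumPy with hypothetical scores:

```python
import numpy as np

pos_y = np.array([0.9, 0.3])   # hypothetical scores for positive instances
neg_y = np.array([0.2, 0.8])   # hypothetical scores for paired negatives

margin = 1.0
# first pair:  max(0, 1 - 0.9 + 0.2) = 0.3
# second pair: max(0, 1 - 0.3 + 0.8) = 1.5  -> mean = 0.9
loss = np.mean(np.maximum(0.0, margin - pos_y + neg_y))
```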
And finally iterate through your data to feed the model:

# main.py

import tensorflow as tf 

from model import Model
from data import load_data

data_x, data_y = ...  # read from your file
pos_x, neg_x = load_data(data_x, data_y)

batch_size = 32  # hyperparameter

model = Model()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # TODO: randomize the order
    for beg in range(0, len(pos_x), batch_size):
        end = min(beg + batch_size, len(pos_x))

        feed_dict = {
            model.pos_x: pos_x[beg:end],
            model.neg_x: neg_x[beg:end]
        }
        _, loss = sess.run([model.train_op, model.loss], feed_dict)
        print("%s/%s, loss = %s" % (beg, len(pos_x), loss))

Upvotes: 2

dxf

Reputation: 573

Suppose we have labels like y=[0,1,0,0,0].

Transform them to Y=[-1,1,-1,-1,-1].

The prediction is y'=[y0',y1',y2',y3',y4'],

so the objective is min f = -mean(Y*y').

Notice that this objective captures the same intent as your loss: it pushes the positive prediction up and the negative ones down. (It is not term-for-term identical, since the positive prediction is counted once rather than once per negative, but the optimization direction is the same.)
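A NumPy sketch of this transformation and objective, with hypothetical prediction values (the TensorFlow version would use tf.reduce_mean):

```python
import numpy as np

y = np.array([0, 1, 0, 0, 0])
Y = 2 * y - 1                                  # maps {0, 1} -> {-1, 1}
y_pred = np.array([0.2, 0.9, 0.1, 0.3, 0.05])  # hypothetical predictions

f = -np.mean(Y * y_pred)  # rewards the positive score, penalizes negatives
```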

Upvotes: 0
