NicolaiF

Reputation: 1343

Defining a cost/loss function in TensorFlow

I am working on a graph network problem where I would like to leverage the power of TensorFlow.

I am having trouble implementing the cost function in TensorFlow correctly, though.

My cost function is given as:

sum_{i>j} A_ij * log(pi_ij) + (1 - A_ij) * log(1 - pi_ij)

where pi_ij = sigmoid(-|z_i - z_j| + beta)

Here |z_i - z_j| is the Euclidean distance, pi_ij denotes the probability of a link between nodes i and j, and A_ij = 1 if there is a link and 0 otherwise (a simple adjacency matrix); pi and A are matrices of the same size. I have solved this optimization problem manually in Python using a simple SGD method. I calculate the cost function as follows:

import tensorflow as tf
import numpy as np
import scipy.sparse.csgraph as csg
from scipy.spatial import distance
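
# NOTE: cmdscale is not defined in the original post and is not a NumPy/SciPy
# function; the implementation below is an assumed classical multidimensional
# scaling (MDS) routine, added only so the snippet runs end to end.
def cmdscale(D):
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J.dot(D ** 2).dot(J)            # double-centered squared distances
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1]              # eigenvalues in descending order
    evals, evecs = evals[idx], evecs[:, idx]
    pos = evals > 0
    Z = evecs[:, pos] * np.sqrt(evals[pos])    # coordinates from positive eigenvalues
    return Z, evals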

Y = np.array([[0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 1., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
   [0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 1., 0.],
   [0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 1., 0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1.],
   [0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
   [1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 1.],
   [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.],
   [0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0.],
   [0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 1.],
   [0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.],
   [0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 1., 0., 0., 0.]])

# drop rows/columns belonging to isolated nodes (all-zero rows)
mask = ~np.all(Y == 0, axis=1)
Y = Y[mask][:, mask]

n = np.shape(Y)[0]
k = 2

# shortest-path distance matrix, then classical MDS (cmdscale) for an initial embedding
D = csg.shortest_path(Y, directed=True)
Z = cmdscale(D)[0][:,0:k]
Z = Z - Z.mean(axis=0, keepdims=True)

# calculating cost
euclideanZ = distance.cdist(Z, Z, 'euclidean')
pi = 1 / (1 + np.exp(-euclideanZ))  # elementwise sigmoid; np.exp broadcasts, no np.vectorize needed

cost = np.sum(Y*np.log(pi)+(1-Y)*np.log(1-pi))

How could I define such a loss function in TensorFlow? Is it even possible? Any help or nudge in the right direction would be greatly appreciated.

EDIT

I got this far in TensorFlow:

tfY = tf.placeholder(shape=(15, 15), dtype=tf.float32)

with tf.variable_scope('test'):
    shape = [] # Shape [] means that we're using a scalar variable
    B = tf.Variable(tf.zeros(shape))
    tfZ = tf.Variable(tf.zeros(shape=(15,2)))

def loss():
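    # pairwise Euclidean distances via ||z_i - z_j||^2 = ||z_i||^2 - 2*z_i.z_j + ||z_j||^2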
    r = tf.reduce_sum(tfZ*tfZ, 1)
    r = tf.reshape(r, [-1, 1])
    D = tf.sqrt(r - 2*tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r))
    return tf.reduce_sum(tfY*tf.log(tf.sigmoid(D+B))+(1-tfY)*tf.log(1-tf.sigmoid(D+B)))

LOSS = loss()
GRADIENT = tf.gradients(LOSS, [B, tfZ])

sess = tf.Session()
sess.run(tf.global_variables_initializer())

tot_loss = sess.run(LOSS, feed_dict={tfZ: Z, tfY: Y})

print(tot_loss)

loss_grad = sess.run(GRADIENT, feed_dict={tfZ: Z, tfY: Y})

print(loss_grad)

which prints the following:

-487.9079
[-152.56271, array([[nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan],
       [nan, nan]], dtype=float32)]

The gradient with respect to beta comes back as a number, and updating beta with it (scaled by the learning rate) improves the score, but the gradient with respect to tfZ is all NaNs. I am obviously doing something wrong; if anyone can spot what it is, I would be grateful.

Upvotes: 1

Views: 1219

Answers (1)

LI Xuhong

Reputation: 2346

Just change this:

D = tf.sqrt(r - 2*tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r) + 1e-8)  # adding a small constant.

The distance matrix has zeros on its diagonal, and the gradient of sqrt is infinite at zero (d/dx sqrt(x) = 1/(2*sqrt(x))), so those entries produce NaNs during backpropagation. Adding a small constant keeps the argument strictly positive.
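
For completeness, here is a minimal end-to-end sketch of the training setup with this fix in place. It is an illustration, not the only way to do it: it reuses Y and Z (and the imports) from the question, flips the sign on D so that pi matches the stated formula pi_ij = sigmoid(-|z_i - z_j| + beta) (a separate issue from the NaNs), and minimizes the negative of the sum, since the expression is a log-likelihood to be maximized. The learning rate and step count are arbitrary.

tfY = tf.placeholder(shape=(15, 15), dtype=tf.float32)
B = tf.Variable(tf.zeros([]))                # scalar bias beta
tfZ = tf.Variable(Z.astype(np.float32))      # warm-start from the cmdscale embedding

r = tf.reduce_sum(tfZ * tfZ, 1)
r = tf.reshape(r, [-1, 1])
# pairwise distances, with the small constant so sqrt is differentiable everywhere
D = tf.sqrt(r - 2 * tf.matmul(tfZ, tf.transpose(tfZ)) + tf.transpose(r) + 1e-8)

pi = tf.sigmoid(-D + B)                      # note the minus sign, as in the formula
NLL = -tf.reduce_sum(tfY * tf.log(pi) + (1 - tfY) * tf.log(1 - pi))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(NLL)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        _, cur = sess.run([train_op, NLL], feed_dict={tfY: Y})
    print(cur)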

Upvotes: 1
