Josh

Reputation: 3321

Printing Tensor elements (v1.14)

I am trying to understand how Keras/TensorFlow works.

In this example, I'm working with an LSTM network and a custom loss function. I want to print the values of the y_pred and loss variables, but a standard print() call will not show the actual numeric values.

When I use print(), I get the following output instead: Tensor("loss_13/dense_14_loss/strided_slice:0", shape=(), dtype=float32)

import tensorflow as tf
from tensorflow.keras import Sequential, backend as K
from tensorflow.keras.layers import Dense, LSTM, Dropout
from tensorflow.keras.losses import categorical_crossentropy
import numpy as np

# Custom weighted cross-entropy-style loss; weights is a per-class weight vector
def weight_fx(weights):
    weights = K.variable(weights)
    def loss(y_true, y_pred):
        y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
        print(y_pred)  # prints a symbolic Tensor, not the numeric values
        loss = y_true * K.log(y_pred) * weights
        return loss
    return loss

# X (training data) and optimizer are defined elsewhere in my script
regressor = Sequential()
regressor.add(LSTM(units=10, dropout=0.10, return_sequences=True,
                   input_shape=(X.shape[1], X.shape[2])))
regressor.add(Dense(units=4, activation='softmax'))
regressor.compile(optimizer=optimizer,
                  loss=weight_fx(np.array([0.005, 0.20, 0.79, 0.005])),
                  metrics=['categorical_accuracy'])

Upvotes: 1

Views: 256

Answers (2)

tornikeo

Reputation: 938

Try doing it like this:

import tensorflow as tf
from tensorflow.keras import Sequential, backend as K
from tensorflow.keras.layers import Dense, LSTM, Dropout
from tensorflow.keras.losses import categorical_crossentropy
import numpy as np

# Dummy data so the example runs standalone
X = tf.ones((10, 10, 10))
y = tf.ones((10, 1))

def weight_fx(weights):
    weights = K.variable(weights)
    def loss(y_true, y_pred):
        y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
        tf.print(y_pred)  # graph-friendly print: shows the actual values
        loss = y_true * K.log(y_pred) * weights
        return loss
    return loss

regressor = Sequential()
regressor.add(LSTM(units=10, dropout=0.10, return_sequences=True,
                   input_shape=(X.shape[1], X.shape[2])))
regressor.add(Dense(units=4, activation='softmax'))
regressor.compile(optimizer='adam',
                  loss=weight_fx(np.array([0.005, 0.20, 0.79, 0.005])),
                  metrics=['categorical_accuracy'])
regressor.fit(X, y)

  • Q: Why do you see Tensor("loss_13/dense_14_loss/strided_slice:0", shape=(), dtype=float32)?
  • A: TensorFlow expects the loss function to be called very often, so it is paramount to optimize it as much as possible. TensorFlow does this through a mechanism called 'tracing': it passes a symbolic 'detector' variable through your function once, records every operation the variable experiences, and from that record builds a separate, much faster 'graph' function. The graph function cannot call many common Python functions with side effects, such as print(). What you see printed is that detector, or 'tracer', and it only runs once (see the sketch after this list).
  • Q: Then how do I debug?
  • A: There are several ways. If you want to print-debug, use tf.print. In my experience this sometimes works and sometimes doesn't. When it doesn't, and you still only see the detector variable, set model.run_eagerly = True or pass run_eagerly=True as an argument to model.compile; with that set, even Python's built-in print will work (try it). Last but not least, you can wrap your side-effect functions in a tf.py_function. This requires a bit more code; a copy-and-paste sample is sketched below.
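
To make the tracing behaviour concrete, here is a minimal standalone sketch (not from the question's model) showing that Python's print() fires only while tf.function traces, while tf.print becomes a graph op that fires on every call:

import tensorflow as tf

@tf.function
def doubled(x):
    print('tracing with', x)   # Python side effect: runs only during tracing
    tf.print('value:', x)      # graph op: runs on every call
    return x * 2

doubled(tf.constant(1.0))  # prints 'tracing with ...' once, then 'value: 1'
doubled(tf.constant(2.0))  # same input signature, no retrace: only 'value: 2'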
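
And here is a minimal, hypothetical sketch of the tf.py_function route (the names log_and_pass and loss_fn are illustrative, not from the question). The wrapped function executes as ordinary eager Python, so built-in print() and .numpy() work inside it:

import tensorflow as tf

def log_and_pass(t):
    # Executes as ordinary eager Python, so built-in print and .numpy() work
    print('y_pred batch:', t.numpy())
    return t

@tf.function
def loss_fn(y_true, y_pred):
    # Route y_pred through the Python-side logger from inside the graph
    y_pred = tf.py_function(log_and_pass, inp=[y_pred], Tout=y_pred.dtype)
    return tf.reduce_mean(tf.square(y_true - y_pred))

loss_fn(tf.ones((2, 3)), tf.zeros((2, 3)))  # prints the batch on every call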

Also, make sure to define the function first and only then use it in model.compile, especially if you are using a Jupyter notebook. A buggy old declaration might still persist in memory and will probably ruin your day.

Did this help?

Upvotes: 3

TheExplodingGradient

Reputation: 390

I haven't tried this with your exact code yet, but you should always use:

tf.print(value)

instead of the plain

print(value)

TensorFlow implemented this function specifically for printing tensor values from inside graphs. Hopefully this helps!
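
One detail worth adding beyond the answer above: tf.print elides long tensors by default, and its summarize argument controls how many elements per dimension are shown; summarize=-1 prints every element, which is relevant given the question is about printing tensor elements. A minimal sketch:

import tensorflow as tf

x = tf.range(100)
tf.print(x)                # default output elides the middle: [0 1 2 ... 97 98 99]
tf.print(x, summarize=-1)  # prints all 100 elements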

Upvotes: 1
