SDG

Reputation: 2342

TypeError: 'NoneType' object is not callable Tensorflow

I'm currently working on a regression problem with TF 2.0. To prepare my dataset, I used the following code:

train = tf.data.Dataset.from_tensor_slices(([train_X], [train_y])).batch(BATCH_SIZE).repeat()
val = tf.data.Dataset.from_tensor_slices(([val_X], [val_y])).batch(BATCH_SIZE).repeat() 

Now if we look at their shapes:

<RepeatDataset shapes: ((None, 42315, 20), (None, 42315)), types: (tf.float64, tf.float64)>
<RepeatDataset shapes: ((None, 2228, 20), (None, 2228)), types: (tf.float64, tf.float64)>

These look right to me. Now, if I run them through the simple model shown below, it trains and works just fine:

simple_lstm_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1)
])

simple_lstm_model.compile(optimizer='adam', loss='mae')

history = simple_lstm_model.fit(train, epochs=EPOCHS,
                      steps_per_epoch=EVALUATION_INTERVAL,
                      validation_data=val, validation_steps=50)

However, when I make my model slightly more complicated and try to train it, it gives me the error in the title of this question (the full traceback is at the very bottom). The more complicated model is shown below:

comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])

comp_lstm.compile(optimizer='adam', loss='mae')

history = comp_lstm.fit(train, 
                      epochs=EPOCHS,
                      steps_per_epoch=EVALUATION_INTERVAL,
                      validation_data=val, validation_steps=50)

In fact, I wanted to try a bidirectional LSTM, but it seems that even a plain stack of LSTMs gives me the issue shown below.


The Error

TypeError                                 Traceback (most recent call last)
<ipython-input-21-8a86aab8a730> in <module>
      2 EPOCHS = 20
      3 
----> 4 history = comp_lstm.fit(train, 
      5                       epochs=EPOCHS,
      6                       steps_per_epoch=EVALUATION_INTERVAL,

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67 
     68     # Running inside `run_distribute_coordinator` already.

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    846                 batch_size=batch_size):
    847               callbacks.on_train_batch_begin(step)
--> 848               tmp_logs = train_function(iterator)
    849               # Catch OutOfRangeError for Datasets of unknown size.
    850               # This blocks until the batch has finished executing.

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    578         xla_context.Exit()
    579     else:
--> 580       result = self._call(*args, **kwds)
    581 
    582     if tracing_count == self._get_tracing_count():

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    609       # In this case we have created variables on the first call, so we run the
    610       # defunned version which is guaranteed to never create variables.
--> 611       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    612     elif self._stateful_fn is not None:
    613       # Release the lock early so that multiple threads can perform the call

TypeError: 'NoneType' object is not callable

Upvotes: 7

Views: 14161

Answers (1)

user11530462

The problem is that when you stack multiple LSTMs, every LSTM layer except the last needs the argument return_sequences=True.

This is because with return_sequences=False (the default), an LSTM layer returns only the output of the last time step. When LSTMs are stacked, however, each layer feeds the next one, so it must pass on the output of the complete sequence rather than just the last time step.
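To make this concrete, here is a minimal sketch (batch and sequence sizes are made up for illustration) of how return_sequences changes the output shape:

import tensorflow as tf

# A dummy batch: 4 sequences, 10 time steps, 20 features each
x = tf.random.normal((4, 10, 20))

# Default (return_sequences = False): output of the last time step only
print(tf.keras.layers.LSTM(64)(x).shape)   # (4, 64)

# return_sequences = True: one output per time step, which is the
# 3-D input the next LSTM layer expects
print(tf.keras.layers.LSTM(64, return_sequences = True)(x).shape)   # (4, 10, 64)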

Changing your model to

comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences = True),
    tf.keras.layers.LSTM(64, return_sequences = True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])

should resolve the error.

The same fix applies if you want to use bidirectional LSTMs.
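For instance, a bidirectional version of the same stack might look like the sketch below (the comp_bilstm name is just for illustration); note that return_sequences = True still goes on every LSTM except the last:

comp_bilstm = tf.keras.models.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences = True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1)
])

comp_bilstm.compile(optimizer='adam', loss='mae')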

Please let me know if you face any other errors and I will be happy to help.

Hope this helps. Happy Learning!

Upvotes: 5
