Reputation: 73
I've been using Keras and TensorFlow 1.x for a while, but I am trying to learn and update to TensorFlow 2.1 (especially tf.data.Datasets). I can successfully create a tfrecords file and load it using a tf.data.TFRecordDataset. After parsing the items and some other preprocessing (e.g., normalization), the dataset returns a tuple whose first item is a dictionary containing the various input (x) tensors and whose second item is a single value/tensor for the target (y) values.
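For reference, the pipeline looks roughly like the sketch below; the feature spec, shapes, and file name are placeholders rather than my actual code:

import tensorflow as tf

# Hypothetical feature spec -- the real one depends on how the records were written.
feature_spec = {
    'sequence': tf.io.FixedLenFeature([278 * 136], tf.float32),
    'target': tf.io.FixedLenFeature([], tf.float32),
}

def parse_and_preprocess(example_proto):
    parsed = tf.io.parse_single_example(example_proto, feature_spec)
    dense = tf.reshape(parsed['sequence'], (278, 136))
    # simplified stand-in for the normalization step
    dense = (dense - tf.reduce_mean(dense)) / (tf.math.reduce_std(dense) + 1e-8)
    features = {
        'contextual': {'one_hot': {}, 'multi_hot': {}, 'dense': {}},
        'sequential': {'one_hot': {}, 'multi_hot': {}, 'dense': dense},
    }
    return features, parsed['target']

dataset = (
    tf.data.TFRecordDataset('data.tfrecords')
    .map(parse_and_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .batch(16)
)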
I can train the model by manually iterating over the dataset in batches using tf.GradientTape() (a rough sketch of that loop is at the end of this question). However, when I try to train the model using model.fit, the following exception is raised inside the call method of my model the first time I use the inputs:
ValueError: Attempt to convert a value (TensorSpec(shape=(16, 278, 136), dtype=tf.float32, name=None)) with an unsupported type (<class 'tensorflow.python.framework.tensor_spec.TensorSpec'>) to a Tensor.
If I print out the inputs inside the call method before trying to use them, their type shows up as a TensorSpec rather than a Tensor (I have several placeholders in my input dictionary for the time being):
{
    'contextual': {'one_hot': {}, 'multi_hot': {}, 'dense': {}},
    'sequential': {
        'one_hot': {},
        'multi_hot': {},
        'dense': TensorSpec(shape=(16, 278, 136), dtype=tf.float32, name=None)
    }
}
I don't know enough about the internals of TensorFlow to tell whether this is expected behaviour or not.
Any ideas about why this might be happening or ways to fix it would be greatly appreciated.
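For completeness, the manual training loop that does work looks roughly like this (the model, loss, optimizer, and num_epochs below are placeholders, not my real training code):

loss_fn = tf.keras.losses.MeanSquaredError()   # placeholder loss
optimizer = tf.keras.optimizers.Adam()

for epoch in range(num_epochs):
    for features, targets in dataset:
        with tf.GradientTape() as tape:
            predictions = model(features, training=True)
            loss = loss_fn(targets, predictions)
        # compute and apply gradients manually
        gradients = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(gradients, model.trainable_variables))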
Upvotes: 0
Views: 1816
Reputation: 73
The problem appears to be that Keras cannot handle nested dictionaries as model inputs. Flattening the dictionary resolves the error. For example:
{
    'contextual_one_hot': ...,
    'contextual_multi_hot': ...,
    'contextual_dense': ...,
    'sequential_one_hot': ...,
    'sequential_multi_hot': ...,
    'sequential_dense': ...
}
Upvotes: 1