Reputation: 107
I'm trying to get the following line of code working:
low_encoder_out = TimeDistributed( AutoregressiveDecoder(...) )([X_tf, embeddings])
where AutoregressiveDecoder is a custom layer that takes two inputs.
After a bit of googling, the problem seems to be that the TimeDistributed wrapper doesn't accept multiple inputs. There are solutions that propose merging the two inputs before feeding them to the layer, but since their shapes are
X_tf.shape: (?, 16, 16, 128, 5)
embeddings.shape: (?, 16, 1024)
I really don't know how to merge them. Is there a way to make the TimeDistributed layer work with more than one input? Or, alternatively, is there a nice way to merge the two inputs?
Upvotes: 3
Views: 1772
Reputation: 33410
As you mentioned, the TimeDistributed layer does not support multiple inputs. One (not very nice) workaround, given that the number of timesteps (i.e. the second axis) must be the same for all the inputs, is to reshape each of them to (None, n_timesteps, n_featsN), concatenate them, and then feed the result to the TimeDistributed layer:
from keras.layers import Reshape, TimeDistributed, concatenate

# Flatten everything after the timestep axis (here n_timesteps = 16),
# then concatenate along the feature axis.
X_tf_r = Reshape((n_timesteps, -1))(X_tf)              # (None, 16, 16*128*5)
embeddings_r = Reshape((n_timesteps, -1))(embeddings)  # (None, 16, 1024)
concat = concatenate([X_tf_r, embeddings_r])           # (None, 16, 11264)
low_encoder_out = TimeDistributed(AutoregressiveDecoder(...))(concat)
Of course, you might need to modify the definition of your custom layer so that it splits the concatenated input back into the two original tensors, if necessary.
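For illustration, here is a minimal sketch of what that separation could look like inside the layer's call method. The split point 16*128*5 = 10240 and the reshape target come from the shapes in your question; everything else about the class body is hypothetical:

from keras import backend as K
from keras.layers import Layer

class AutoregressiveDecoder(Layer):
    def call(self, inputs):
        # TimeDistributed applies the layer to each timestep slice, so
        # `inputs` is 2-D with 16*128*5 + 1024 = 11264 features.
        x_tf_flat = inputs[:, :16 * 128 * 5]
        emb = inputs[:, 16 * 128 * 5:]
        # Restore the original per-timestep shape of X_tf.
        x_tf = K.reshape(x_tf_flat, (-1, 16, 128, 5))
        # ... the actual decoding logic using x_tf and emb goes here ...
        return emb  # placeholder output for the sketch

Depending on your Keras version, you may also need to implement compute_output_shape on the custom layer if its output shape differs from its input shape.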
Upvotes: 3