Physbox

Reputation: 385

Keras error: Expected size[1] in [0, 0], but got 1

I'm trying to build the decoder within a larger seq2seq model in Keras, but I keep getting the following error when I call the fit function. The model builds fine otherwise.

InvalidArgumentError: Expected size[1] in [0, 0], but got 1
[[Node: lambda_2/Slice = Slice[Index=DT_INT32, T=DT_FLOAT, 
_device="/job:localhost/replica:0/task:0/device:CPU:0"](lambda_1/Slice, 
metrics/acc/Const, lambda_2/Slice/size)]]

lambda_x/Slice seems to refer to the Lambda layers created in the loop.

My model has 4 inputs of shape (N, 11), (N, 3), (N, 11), (N, 3) and outputs a softmax distribution of shape (N, 11, 1163).

Below is my code for the decoder, which is where the slicing Lambda layer is used:

def _decoder_serial_input(self, encoder_states, state_h, state_c):
    """
    Compute one-by-one input to decoder, taking output from previous time-step as input
    :param encoder_states: All the encoder states
    :param state_h: starting hidden state
    :param state_c: starting cell state
    :return: Concatenated output which is shape = (N, Timestep, Input dims)
    """

    all_outputs = []
    states = [state_h, state_c] 
    inputs = self.decoder_inputs  # Shape = N x num_timestep

    repeat = RepeatVector(1, name="decoder_style")
    conc_1 = Concatenate(axis=-1, name="concatenate_decoder")
    conc_att = Concatenate(axis=-1, name="concatenate_attention")

    for t in range(self.max_timestep):

        # This slices the input. -1 is to accept everything in that dimension
        inputs = Lambda(lambda x: K.slice(x, start=[0, t], size=[-1, 1]))(inputs)

        embedding_output = self.embedding_decoder(inputs)
        style_labels = repeat(self.decoder_style_label) 

        concat = conc_1([embedding_output, style_labels])  # Join to style label

        decoder_output_forward, state_h, state_c = self.decoder(concat, initial_state=states)

        if self.attention:
            context, _ = self._one_step_attention(encoder_states, state_h)  # Size of latent dims
            decoder_output_forward = conc_att([context, decoder_output_forward])

        outputs = self.decoder_softmax_output(decoder_output_forward)  # Shape = (N, 1, input dims)

        all_outputs.append(outputs)
        states = [state_h, state_c]

    return Concatenate(axis=1, name="conc_dec_forward")(all_outputs)

Does anyone know why I am getting this error? Thanks.

Upvotes: 1

Views: 2528

Answers (1)

Physbox

Reputation: 385

I fixed the issue. The problem was that I was assigning the output of the Lambda layer back to the inputs variable, which is wrong. That changed the shape of the tensor being sliced on each pass through the loop: on the first iteration it was (N, 11), as desired, but on subsequent iterations it became (N, 1), so slicing at index t >= 1 was out of range and raised the error. Storing the slice in a separate variable (and feeding that to the embedding layer) while leaving inputs untouched fixes it.
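The shape collapse can be reproduced without Keras. The sketch below uses NumPy slicing in place of K.slice to show how reassigning the sliced tensor back to inputs shrinks the second dimension after the first iteration (the variable names here mirror the question's code but are otherwise illustrative):

```python
import numpy as np

# Simulate three decoder timesteps over a batch of 4 sequences of length 11.
x = np.zeros((4, 11))
inputs = x
shapes = []
for t in range(3):
    # Analogous to K.slice(x, start=[0, t], size=[-1, 1]).
    sliced = inputs[:, t:t + 1]
    shapes.append(sliced.shape)
    inputs = sliced  # the bug: the full input tensor is overwritten

print(shapes)  # [(4, 1), (4, 0), (4, 0)]
```

After the first iteration, inputs has width 1, so slicing at column t = 1 yields nothing; NumPy silently returns an empty array, whereas TensorFlow's Slice op raises the "Expected size[1] in [0, 0], but got 1" error from the question. Keeping the slice in its own variable (e.g. step_input) leaves inputs at shape (4, 11) on every iteration.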

Upvotes: 1
