Clark Kent

Reputation: 1176

Keras Flatten Conv3D ValueError The input shape to Flatten is not fully defined

I'm trying to build a variable-length sequence classification model in Keras with the TensorFlow backend, based on Marcin's PS3 example here: https://stackoverflow.com/a/42635571/1203882

I'm getting an error:

ValueError: The shape of the input to "Flatten" is not fully defined (got (None, 1, 1, 32). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.

I tried putting an input shape on the Inception layer, but the error persists. How do I correct this?

To reproduce:

import numpy as np
import keras
from keras.utils import to_categorical
from keras.layers import TimeDistributed, Conv3D, Input, Flatten, Dense
from keras.applications.inception_v3 import InceptionV3
from random import randint
from keras.models import Model

HEIGHT = 224
WIDTH = 224
NDIMS = 3
NUM_CLASSES = 4

def input_generator():
    while True:
        nframes = randint(1,5)
        label = randint(0,NUM_CLASSES-1)
        x = np.random.random((nframes, HEIGHT, WIDTH, NDIMS))
        x = np.expand_dims(x, axis=0)
        y = keras.utils.to_categorical(label, num_classes=NUM_CLASSES)
        yield (x, y)

def make_model():
    layers = 32
    inp = Input(shape=(None, HEIGHT, WIDTH, NDIMS))
    cnn = InceptionV3(include_top=False, weights='imagenet')
    # cnn = InceptionV3(include_top=False, weights='imagenet', input_shape=(HEIGHT, WIDTH, NDIMS)) # same result
    td = TimeDistributed(cnn)(inp)
    c3da = Conv3D(layers, 3,3,3)(td)
    c3db = Conv3D(layers, 3,3,3)(c3da)
    flat = Flatten()(c3db)
    out = Dense(NUM_CLASSES, activation="softmax")(flat)
    model = Model(input=(None, HEIGHT, WIDTH, NDIMS), output=out)
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

if __name__ == '__main__':
    model = make_model()
    model.fit_generator(input_generator(), samples_per_epoch=5, nb_epoch=2, verbose=1)

Upvotes: 3

Views: 1390

Answers (1)

rvinas

Reputation: 11895

It is not possible to flatten a variable-length tensor. If that were possible, how would Keras know the number of input units of your last fully connected layer? The number of parameters of a model must be fixed at graph-construction time.
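To see why, here is a plain-NumPy illustration (the feature-volume shapes are hypothetical, chosen only to match the `(None, 1, 1, 32)` in the error message):

```python
import numpy as np

# Illustrative only: the flattened feature count after the Conv3D stack
# depends on the number of frames, so a following Dense layer's weight
# matrix has no fixed size at graph-construction time.
for nframes in (3, 5):
    feats = np.zeros((nframes, 1, 1, 32))  # hypothetical per-clip feature volume
    print(nframes, feats.size)             # 3 -> 96, 5 -> 160
```

A Dense layer fed 96 inputs needs a different weight matrix than one fed 160, which is why Flatten refuses an undefined dimension.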

There are two possible solutions to your problem:

a) Fix the number of frames:

inp = Input(shape=(NFRAMES, HEIGHT, WIDTH, NDIMS))
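If your clips genuinely vary in length, one way to use a fixed `NFRAMES` is to pad or truncate each clip before feeding it to the model. A minimal NumPy sketch (`pad_frames` and `NFRAMES = 5` are hypothetical, not from the original question):

```python
import numpy as np

NFRAMES = 5  # assumed fixed clip length

def pad_frames(x, nframes=NFRAMES):
    """Hypothetical helper: zero-pad (or truncate) a (frames, H, W, C) clip."""
    out = np.zeros((nframes,) + x.shape[1:], dtype=x.dtype)
    n = min(nframes, x.shape[0])
    out[:n] = x[:n]
    return out

clip = np.random.random((3, 224, 224, 3))
print(pad_frames(clip).shape)  # (5, 224, 224, 3)
```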

b) Aggregate over the frames dimension before the Flatten layer. For example:

from keras.layers import Lambda
import keras.backend as K    

def make_model():
    layers = 32
    inp = Input(shape=(None, HEIGHT, WIDTH, NDIMS))
    cnn = InceptionV3(include_top=False, weights='imagenet')
    # cnn = InceptionV3(include_top=False, weights='imagenet', input_shape=(HEIGHT, WIDTH, NDIMS)) # same result
    td = TimeDistributed(cnn)(inp)
    c3da = Conv3D(layers, 3,3,3)(td)
    c3db = Conv3D(layers, 3,3,3)(c3da)
    aggregated = Lambda(lambda x: K.sum(x, axis=1))(c3db)
    flat = Flatten()(aggregated)
    out = Dense(NUM_CLASSES, activation="softmax")(flat)
    model = Model(input=inp, output=out)
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

NOTE 1: There might be better strategies to aggregate the frames' dimension.
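For instance, a mean or max over the frames axis also yields a fixed-shape result. A plain-NumPy sketch of the options, assuming feature maps shaped `(batch, frames, h, w, channels)` (the concrete shape below is illustrative):

```python
import numpy as np

x = np.random.random((1, 4, 2, 2, 32))  # (batch, frames, h, w, channels)
summed = x.sum(axis=1)     # what the Lambda in the model above computes
averaged = x.mean(axis=1)  # insensitive to clip length
maxed = x.max(axis=1)      # keeps the strongest response per feature
print(summed.shape, averaged.shape, maxed.shape)  # all (1, 2, 2, 32)
```

All three collapse the variable frames axis, so the result's shape no longer depends on the clip length.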

NOTE 2: The input to keras.utils.to_categorical should be a list of labels:

y = keras.utils.to_categorical([label], num_classes=NUM_CLASSES)
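To illustrate the resulting shape, here is a minimal NumPy stand-in for `to_categorical` on a 1-D label list (`one_hot` is a hypothetical helper, not part of Keras):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Minimal NumPy stand-in for keras.utils.to_categorical on a 1-D label list
    labels = np.asarray(labels, dtype=int)
    out = np.zeros((len(labels), num_classes), dtype="float32")
    out[np.arange(len(labels)), labels] = 1.0
    return out

y = one_hot([2], num_classes=4)
print(y.shape)  # (1, 4): one row per label, matching the (batch, NUM_CLASSES) model output
```

Passing a bare scalar instead of `[label]` would not produce this 2-D `(1, NUM_CLASSES)` shape that `fit_generator` expects per batch.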

Upvotes: 1
