junfanbl

Reputation: 461

Input 0 of layer "model" is incompatible with the layer: expected shape=(None, n, n, n)

I'm not quite sure what my problem is here. When trying to run the model below, I get an error saying:

ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 250, 319, 3), found shape=(250, 319, 3)

The size of the image being returned from state() is (250, 319, 3), which I have clearly defined in the get_actor() model's input below. Some things I have tried:

One thing I have read is that Keras may be expecting a batch of images rather than a single one, so the dimension may need to be expanded to (1, 250, 319, 3). I tried doing that using tf.expand_dims and simply got another error saying it was expecting (None, 1, 250, 319, 3).
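For reference, expanding the 3-D image once along axis 0 does give exactly the batch shape Keras expects; a minimal sketch (with a zero tensor standing in for the screen capture, which is an assumption here) shows this. Getting (None, 1, 250, 319, 3) back instead usually means the expansion was applied on top of an Input layer that already declared an extra leading dimension.

```python
import tensorflow as tf

# Stand-in for the captured frame: a 3-D tensor of shape (250, 319, 3).
img = tf.zeros((250, 319, 3))

# Add a batch axis in front, turning the single image into a batch of one.
batched = tf.expand_dims(img, axis=0)
print(batched.shape)  # (1, 250, 319, 3)
```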

Another thing I tried was simply changing the input size to (None, 250, 319, 3), which seemed to work until it produced another error, in association with layer3 = layers.Flatten()(layer2), saying:

ValueError: The last dimension of the inputs to a Dense layer should be defined. Found None. Full input shape received: (None, None)

That could be a totally separate issue. Any suggestions on the best way to proceed? I'm not sure why it insists on including None in the input shape.
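On the None question: a small check (using a throwaway Flatten model purely for illustration) shows that Keras prepends None for the batch size on its own, so shape=(250, 319, 3) is already correct, and writing shape=(None, 250, 319, 3) introduces a second unknown axis that later leaves Flatten unable to tell Dense its input width.

```python
import tensorflow as tf

# Input(shape=...) excludes the batch dimension; Keras prepends
# None (an unspecified batch size) automatically.
inputs = tf.keras.layers.Input(shape=(250, 319, 3))
model = tf.keras.Model(inputs, tf.keras.layers.Flatten()(inputs))
print(model.input_shape)  # (None, 250, 319, 3)
```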

import tensorflow as tf
from tensorflow.keras import layers

def state():
    # can set to N number of captures per step
    image = tf.convert_to_tensor(screen_capture())
    img = tf.image.resize(image, [round(image.shape[0] / 2), round(image.shape[1] / 2)], preserve_aspect_ratio=True)
    img_norm = tf.keras.utils.normalize(img, axis=0)
    return img_norm

def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)

    # Convolutions
    inputs = layers.Input(shape=(250, 319, 3))
    layer1 = layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
    layer2 = layers.Conv2D(64, 3, strides=4, activation="relu")(layer1)

    layer3 = layers.Flatten()(layer2)

    # Fully connected layers
    layer4 = layers.Dense(256, activation="relu")(layer3)
    layer5 = layers.Dense(256, activation="relu")(layer4)
    action = layers.Dense(3, activation="tanh", kernel_initializer=last_init)(layer5)

    outputs = action
    model = tf.keras.Model(inputs, outputs)
    return model

actor_model = get_actor()
sampled_actions = tf.squeeze(actor_model(state()))

Upvotes: 1

Views: 2213

Answers (1)

AloneTogether

Reputation: 26708

Running your code with tf.expand_dims does not produce any errors on my end, so the problem must be somewhere else:

import tensorflow as tf

def state():  
    return tf.expand_dims(tf.keras.utils.normalize(tf.random.normal((250, 319, 3))), axis=0)

def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)

    # Convolutions
    inputs = tf.keras.layers.Input(shape=(250, 319, 3))
    layer1 = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
    layer2 = tf.keras.layers.Conv2D(64, 3, strides=4, activation="relu")(layer1)

    layer3 = tf.keras.layers.Flatten()(layer2)

    # Fully connected layers
    layer4 = tf.keras.layers.Dense(256, activation="relu")(layer3)
    layer5 = tf.keras.layers.Dense(256, activation="relu")(layer4)
    action = tf.keras.layers.Dense(3, activation="tanh", kernel_initializer=last_init)(layer5)

    outputs = action
    model = tf.keras.Model(inputs, outputs)
    return model

actor_model = get_actor()
sampled_actions = tf.squeeze(actor_model(state()))
print(sampled_actions)
tf.Tensor([ 0.00166097  0.00444322 -0.00574574], shape=(3,), dtype=float32)

Upvotes: 1
