Simplicity

Reputation: 48986

Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2

I have the Keras model shown below, where I'm trying to merge an image input with a feature vector of numerical values, but I'm getting the following error:

ValueError: Input 0 is incompatible with layer flatten_2: expected min_ndim=3, found ndim=2

which occurs on the following statement:

value_model.add(Flatten(input_shape=(12,)))

Any ideas on how I can solve the issue?

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Sequential()
image_model.add(Convolution2D(32,8,8, subsample=(4,4), input_shape=(512,512,1)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,4,4, subsample=(2,2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,3,3, subsample=(1,1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))

value_model = Sequential()
value_model.add(Flatten(input_shape=(12,)))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))

merged = Concatenate([image_model, value_model])

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=[image_input, vector_input], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['acc'])
model.fit([images, features], y, epochs=5)

EDIT-1

This is the full script:

from keras.layers import Input, Concatenate, Conv2D, Flatten, Dense, Convolution2D, Activation
from keras.models import Model, Sequential
import pandas as pd
import numpy as np
import cv2
import os

def label_img(img):
    word_label = img.split('.')[-3]
    if word_label == 'r':
        return 1
    elif word_label == 'i':
        return 0

train_directory = '/train'
images = []
y = []

dataset = pd.read_csv('results.csv')

dataset = dataset[[ 'first_value',
                    'second_value']]

features = dataset.iloc[:,0:12].values

for root, dirs, files in os.walk(train_directory):
    for file in files:
        image = cv2.imread(root + '/' + file)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        image = cv2.resize(image,(512,512),interpolation=cv2.INTER_AREA)
        image = image/255
        images.append(image)
        label = label_img(file)
        y.append(label)

images = np.asarray(images)
images = images.reshape((-1,512,512,1))

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Sequential()
image_model.add(Convolution2D(32,8,8, subsample=(4,4), input_shape=(512,512,1)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,4,4, subsample=(2,2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64,3,3, subsample=(1,1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))

value_model = Sequential()
#value_model.add(Flatten(input_shape=(12,)))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))

merged = Concatenate([image_model, value_model])

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(1, activation='sigmoid'))

model = Model(inputs=[image_input, vector_input], outputs=output)
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['acc'])
model.fit([images, features], y, epochs=5)

EDIT-2

When I did the following:

output = final_model.add(Dense(1, activation='sigmoid'))

I still received the same error.

Upvotes: 4

Views: 10232

Answers (1)

Sreeram TP

Reputation: 11937

You can change your code to reflect the new Keras 2 API, as shown below. In your code you are mixing the older Keras API with the Keras 2 API.

I also recommend using the new Conv2D layer instead of the Convolution2D layer, along with the Keras 2 API. The subsample argument is now called strides in Conv2D.

from keras.layers import Input, Conv2D, Activation, Flatten, Dense, concatenate
from keras.models import Model
import numpy as np

image_input = Input((512, 512, 1))
vector_input = Input((12,))

image_model = Conv2D(32,(8,8), strides=(4,4))(image_input)
image_model = Activation('relu')(image_model)
image_model = Conv2D(64,(4,4), strides=(2,2))(image_model)
image_model = Activation('relu')(image_model)
image_model = Conv2D(64,(3,3), strides=(1,1))(image_model)
image_model = Activation('relu')(image_model)
image_model = Flatten()(image_model)
image_model = Dense(512)(image_model)
image_model = Activation('relu')(image_model)

value_model = Dense(16)(vector_input)
value_model = Activation('relu')(value_model)
value_model = Dense(16)(value_model)
value_model = Activation('relu')(value_model)
value_model = Dense(16)(value_model)
value_model = Activation('relu')(value_model)

merged = concatenate([image_model, value_model])

output = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[image_input, vector_input], outputs=output)

model.compile(loss='binary_crossentropy', optimizer='adam')
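
To confirm that the image branch and the vector branch are wired into a single sigmoid output, you can inspect the graph with model.summary():

model.summary()  # lists both Input layers, the concatenate layer and the final Dense(1)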

Consider a toy dataset:

I = np.random.rand(100, 512, 512, 1)
V = np.random.rand(100, 12, )

y = np.random.rand(100, 1, )

Training:

model.fit([I, V], y, epochs=10, verbose=1)


Epoch 1/10
100/100 [==============================] - 9s 85ms/step - loss: 3.4615
Epoch 2/10
 32/100 [========>.....................] - ETA: 4s - loss: 0.9696
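
Once trained, inference uses the same two-input list; a minimal sketch with the toy arrays I and V from above:

preds = model.predict([I, V])  # sigmoid outputs, one per sample
print(preds.shape)             # (100, 1)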

Upvotes: 5
