edn

Reputation: 2183

Keras 'Tensor' object has no attribute 'ndim'

I am trying to implement a siamese network (using the triplet loss method), but I just cannot make it train. After many tries, I suspect the problem is in the generator (where I prepare the input data stream for training), but I could not localize it so far. HELP! :)

Here is my model definition (it is based on ResNet50):

from keras.applications.resnet50 import ResNet50
from keras.layers import Dense
from keras.models import Model

model = ResNet50(weights='imagenet')
model.layers.pop()  # drop the final 1000-class classification layer
for layer in model.layers:
    layer.trainable = False  # freeze the pre-trained ResNet50 weights
x = model.get_layer('flatten_1').output
model_out = Dense(128, activation='sigmoid', name='model_out')(x)
new_model = Model(inputs=model.input, outputs=model_out)

Here I define the model to be trained:

from keras.layers import Input, concatenate
from keras.optimizers import Adam

anchor_in = Input(shape=(224, 224, 3))
positive_in = Input(shape=(224, 224, 3))
negative_in = Input(shape=(224, 224, 3))

anchor_out = new_model(anchor_in)
positive_out = new_model(positive_in)
negative_out = new_model(negative_in)

merged_vector = concatenate([anchor_out, positive_out, negative_out], axis=-1)
# Define the model to be trained
siamese_model = Model(inputs=[anchor_in, positive_in, negative_in],
                      outputs=merged_vector)
siamese_model.compile(optimizer=Adam(lr=.001), loss=triplet_loss)
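
For reference, triplet_loss is not shown here; it splits the concatenated 3 x 128 output back into the three embeddings and computes a margin loss. A minimal sketch of what I mean (the margin value and the squared-Euclidean distance are just example choices, not necessarily my exact function):

from keras import backend as K

def triplet_loss(y_true, y_pred, alpha=0.2):
    # y_pred is the concatenated [anchor | positive | negative] batch, shape (batch, 384)
    anchor = y_pred[:, 0:128]
    positive = y_pred[:, 128:256]
    negative = y_pred[:, 256:384]
    pos_dist = K.sum(K.square(anchor - positive), axis=-1)
    neg_dist = K.sum(K.square(anchor - negative), axis=-1)
    # hinge on the margin alpha (0.2 is only an example value)
    return K.maximum(pos_dist - neg_dist + alpha, 0.0)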

To be able to train the model, I need to feed it data with a generator, and here is how I define it:

(Note that I intentionally put only 1 picture in each folder just to start with. I will increase the number of pictures in each folder later if I can make it work.)

from keras.preprocessing.image import ImageDataGenerator

def generator_three_imgs():
    train_path = r'C:\Users\jon\Desktop\AI_anaconda\face_recognition\dataset\train\E'
    generator1 = ImageDataGenerator()
    generator2 = ImageDataGenerator()
    generator3 = ImageDataGenerator()
    anchor_train_batches = generator1.flow_from_directory(train_path+'\Ed_A', target_size=(224, 224), batch_size=1)
    positive_train_batches = generator2.flow_from_directory(train_path+'\Ed_P', target_size=(224, 224), batch_size=1)
    negative_train_batches = generator3.flow_from_directory(train_path+'\Ed_N', target_size=(224, 224), batch_size=1)
    while True:
        anchor_imgs, anchor_labels = anchor_train_batches.next()
        positive_imgs, positive_labels = positive_train_batches.next()
        negative_imgs, negative_labels = negative_train_batches.next()
        concat_out = concatenate([anchor_out, positive_out, negative_out], axis=-1)
        yield ([anchor_imgs, positive_imgs, negative_imgs], 
               concat_out)

And finally, I try to train the model as follows:

siamese_model.fit_generator(generator_three_imgs(),
                            steps_per_epoch=1, epochs=15, verbose=2)

which fails right away with the following error message:

Epoch 1/15
Found 1 images belonging to 1 classes.
Found 1 images belonging to 1 classes.
Found 1 images belonging to 1 classes.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-23-7537b4595917> in <module>()
      1 siamese_model.fit_generator(generator_three_imgs(),
----> 2                             steps_per_epoch=1, epochs=15, verbose=2)

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
   2228                     outs = self.train_on_batch(x, y,
   2229                                                sample_weight=sample_weight,
-> 2230                                                class_weight=class_weight)
   2231 
   2232                     if not isinstance(outs, list):

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py in train_on_batch(self, x, y, sample_weight, class_weight)
   1875             x, y,
   1876             sample_weight=sample_weight,
-> 1877             class_weight=class_weight)
   1878         if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
   1879             ins = x + y + sample_weights + [1.]

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
   1478                                     output_shapes,
   1479                                     check_batch_axis=False,
-> 1480                                     exception_prefix='target')
   1481         sample_weights = _standardize_sample_weights(sample_weight,
   1482                                                      self._feed_output_names)

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
     74         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     75         data = [data]
---> 76     data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
     77 
     78     if len(data) != len(names):

~\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\training.py in <listcomp>(.0)
     74         data = data.values if data.__class__.__name__ == 'DataFrame' else data
     75         data = [data]
---> 76     data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
     77 
     78     if len(data) != len(names):

AttributeError: 'Tensor' object has no attribute 'ndim'

Is there maybe someone out there who has more experience with this?


I realized that I had pasted the wrong data above, but fixing that alone would not solve the problem. The solution Daniel Möller suggested below solved it.

There was a typo in the generator function above. The corrected version (incl. Daniel's suggestion below) looks as follows:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def generator_three_imgs(batch_size=1):
    train_path = r'C:\Users\sinthes\Desktop\AI_anaconda\face_recognition\dataset\train\E'
    generator1 = ImageDataGenerator()
    generator2 = ImageDataGenerator()
    generator3 = ImageDataGenerator()
    anchor_train_batches = generator1.flow_from_directory(train_path+'\Ed_A', target_size=(224, 224), batch_size=batch_size)
    positive_train_batches = generator2.flow_from_directory(train_path+'\Ed_P', target_size=(224, 224), batch_size=batch_size)
    negative_train_batches = generator3.flow_from_directory(train_path+'\Ed_N', target_size=(224, 224), batch_size=batch_size)
    while True:
        anchor_imgs, anchor_labels = anchor_train_batches.next()
        positive_imgs, positive_labels = positive_train_batches.next()
        negative_imgs, negative_labels = negative_train_batches.next()
        concat_out = np.concatenate([anchor_labels, positive_labels, negative_labels], axis=-1)
        yield ([anchor_imgs, positive_imgs, negative_imgs], 
               concat_out)

Upvotes: 4

Views: 11896

Answers (1)

Daniel Möller

Reputation: 86600

Yes, your generator is using a Keras function (meant for tensors) to concatenate numpy data.

Use numpy.concatenate([anchor_labels, positive_labels, negative_labels], axis=-1).
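
For example, inside the while loop of your generator (a sketch using your variable names, with the label arrays as dummy targets, as in your edit):

import numpy as np

# numpy concatenation of numpy label arrays, not keras.layers.concatenate of tensors
concat_out = np.concatenate([anchor_labels, positive_labels, negative_labels], axis=-1)
yield ([anchor_imgs, positive_imgs, negative_imgs], concat_out)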

Upvotes: 5
