Mornor

Reputation: 3803

Training of Conv2D model stuck [MNIST dataset]

As part of a bigger project, I am writing a small Conv2D model to train a neural network on the MNIST dataset.

My (classic) workflow is as follows:

  1. Load the dataset and convert it to a NumPy array
  2. Split the dataset into a training and a validation set
  3. Reshape (X_train.reshape(X_train.shape[0], 28, 28, 1)) and one-hot encode (keras.utils.to_categorical(y_train, 10))
  4. Get the model
  5. Train it based on the data, and save it
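Step 3 can be sketched like this (dummy arrays stand in for the real MNIST data, and an identity-matrix lookup stands in for keras.utils.to_categorical so the snippet runs without Keras):

```python
import numpy as np

# Dummy stand-ins for the MNIST arrays (shapes assumed from the question)
X_train = np.random.rand(100, 28, 28)
y_train = np.random.randint(0, 10, size=100)

# Add the single grayscale channel dimension expected by Conv2D layers
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)

# One-hot encode the labels: row y of a 10x10 identity matrix is the
# one-hot vector for class y (equivalent to to_categorical(y_train, 10))
y_train = np.eye(10)[y_train]
```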

My train function is defined as follows:

def train(model, X_train, y_train, X_val, y_val):
    model.fit_generator(
        generator=get_next_batch(X_train, y_train),
        steps_per_epoch=200,
        epochs=EPOCHS,
        validation_data=get_next_batch(X_val, y_val),
        validation_steps=len(X_val)
    )

    return model

And the generator I use:

def get_next_batch(X, y):
    # Preallocated buffers for images and labels; note they are
    # overwritten in place on every pass through the loop
    X_batch = np.zeros((BATCH_SIZE, 28, 28, 1))
    y_batch = np.zeros((BATCH_SIZE, 10))

    while True:
        # Fill the batch with randomly sampled examples (with replacement)
        for i in range(BATCH_SIZE):
            random_index = np.random.randint(len(X))
            X_batch[i] = X[random_index]
            y_batch[i] = y[random_index]
        yield X_batch, y_batch
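As a quick sanity check, the generator can be exercised on dummy data to confirm the batch shapes (the array sizes here are placeholders, not the real MNIST dimensions of the split):

```python
import numpy as np

BATCH_SIZE = 32  # placeholder value

def get_next_batch(X, y):
    X_batch = np.zeros((BATCH_SIZE, 28, 28, 1))
    y_batch = np.zeros((BATCH_SIZE, 10))
    while True:
        for i in range(BATCH_SIZE):
            random_index = np.random.randint(len(X))
            X_batch[i] = X[random_index]
            y_batch[i] = y[random_index]
        yield X_batch, y_batch

# Dummy data shaped like the preprocessed MNIST arrays
X = np.random.rand(100, 28, 28, 1)
y = np.eye(10)[np.random.randint(0, 10, 100)]

batch_X, batch_y = next(get_next_batch(X, y))
```

One thing worth knowing: the two preallocated buffers are reused, so every yield hands back the same array objects with new contents.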

Now, as it is, it trains, but it appears to hang on the last step:

Using TensorFlow backend.
Epoch 1/3
2018-04-18 19:25:08.170609: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
199/200 [============================>.] - ETA: 0s - loss: 

Whereas if I don't use any generator:

def train(model, X_train, y_train, X_val, y_val):
    model.fit(
        X_train,
        y_train,
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        verbose=1,
        validation_data=(X_val, y_val)
    )

    return model

It works perfectly.

Obviously my method get_next_batch is doing something wrong, but I can't figure out why.

Any help would be more than welcome!

Upvotes: 0

Views: 233

Answers (1)

Chris Farr

Reputation: 3779

The problem is that you are creating a huge validation set in your generator function. Look where these arguments are passed...

    validation_data=get_next_batch(X_val, y_val),
    validation_steps=len(X_val)

Let's say your BATCH_SIZE is 1,000 and your validation set has 1,000 images. Each validation step pulls a full batch of 1,000 images, and validation_steps=len(X_val) tells Keras to run 1,000 such steps.

So 1,000 x 1,000 = 1,000,000 images run through your network at the end of every epoch, which takes a very long time. It isn't actually stuck. You can change validation_steps to a small static number as mentioned in the comments; I just thought an explanation would help put it in perspective.
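Concretely, using those hypothetical numbers, the right value is the number of batches needed to cover the validation set once, not the number of images:

```python
BATCH_SIZE = 1000  # hypothetical, matching the example above
n_val = 1000       # hypothetical validation-set size

# What the question's code does: one full batch per validation *image*
images_validated = n_val * BATCH_SIZE   # 1,000,000 images per epoch

# What it should do: just enough steps to cover the set once
validation_steps = n_val // BATCH_SIZE  # 1 step of 1,000 images
```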

Upvotes: 1
