akshit.C

Reputation: 314

My Google Colab session is crashing due to excessive RAM usage

I'm training a CNN with 2403 images, 1280x720 px each. This is the code that I'm running:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Activation,Dense,Flatten,Dropout
model = keras.Sequential()

model.add(Conv2D(32, (3, 3), input_shape=(1280,720,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=640,
    class_mode='categorical')
history = model.fit(
    train_generator,
    steps_per_epoch= 2403//640,
    epochs= 15,
)

The session is crashing before the first epoch. Is there anything that I can do to reduce RAM usage? What other alternatives do I have?

Upvotes: 2

Views: 10465

Answers (2)

igneous spark

Reputation: 47

There are multiple solutions available:

  1. Subscribe to Colab Pro and use as much RAM as your subscription tier allows.

  2. Reduce the batch size, e.g. batch_size=512, 256, 64, 32, 16, 8, whichever works for you (see the sketch after this list).

  3. Use only a small part of your data, i.e. train on a sample or chunk of the dataset, for example:

    train_data = train_data.sample(n=8000, random_state=12).copy()
    train_data = train_data.reset_index(drop=True)
    test_data = test_data.sample(n=2014, random_state=12).copy()
    test_data = test_data.reset_index(drop=True)

    Adjust the sample sizes to whatever works for your setup.

  4. This step is not guaranteed to work, so do not count on it: (i) Runtime -> Manage sessions, delete all sessions and open the file again; (ii) Runtime -> Disconnect and delete runtime, then reload the Colab page. RAM is sometimes left allocated across sessions, so deleting the runtime can free it. It works sometimes and not others; it has worked for me, which is why I mention it here.
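A minimal sketch of option 2 applied to the generator from the question (assuming the same directory layout and model; the exact batch size that fits depends on how much RAM the session has):

# Same generator as in the question, but with a much smaller batch size
train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=32,  # was 640; try 64, 32 or 16 until it fits in RAM
    class_mode='categorical')

history = model.fit(
    train_generator,
    steps_per_epoch=2403 // 32,  # keep this consistent with the new batch size
    epochs=15,
)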

Upvotes: 0

Kaushal28

Reputation: 5557

It seems you are using a very large batch size, which is consuming all the RAM, so first try a smaller batch size like 32 or 64. Your images are also very large; you can reduce their size for the initial experiments.
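As a rough back-of-the-envelope check (a sketch, assuming the decoded inputs are float32, i.e. 4 bytes per value), one batch of raw inputs at the original settings already takes several gigabytes before activations and gradients are even counted:

# Approximate memory for one batch of inputs at batch_size=640, 1280x720 RGB, float32
batch_bytes = 640 * 1280 * 720 * 3 * 4
print(batch_bytes / 1024**3)  # roughly 6.6 GiB just for the inputs of a single batch

Shrinking both the batch size and the image size brings that down by orders of magnitude, for example: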

train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(256, 256),  # -> Change the image size
    batch_size=32,  # -> Reduce batch size
    class_mode='categorical'
)
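One more thing to keep in mind (this refers to the model from the question): if you change target_size, the input_shape of the first Conv2D layer has to match the new image size, otherwise model.fit will fail with a shape mismatch:

# The first layer must expect the shape the generator now produces (256x256 RGB)
model.add(Conv2D(32, (3, 3), input_shape=(256, 256, 3)))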

Upvotes: 5
