Reputation: 3
I am trying to resize the images in the CIFAR-10 dataset from Keras with this code:
import tensorflow as tf

cifar10 = tf.keras.datasets.cifar10
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
train_images = tf.image.resize(train_images, (244, 244))
test_images = tf.image.resize(test_images, (244, 244))
However, when I run it on my CPU I get this error message:
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor
with shape[50000,244,244,3] and type float on /job:localhost/replica:0/task:0/device:CPU:0
by allocator cpu [Op:ResizeBilinear]
Is there any way to lower the memory usage of this resizing?
Upvotes: 0
Views: 1052
Reputation: 8102
You are trying to keep all 60,000 resized images in memory at once, which is what causes the resource-exhaustion error. Not sure why you would want to resize the images, since there is no more information in the larger image. If you really need to resize them, you will have to write them to a disk directory and then read them back in. You can't read them ALL back in at the same time, because you will get another resource-exhaustion error. To resize and save the images, use the code below:
import cv2
import os

save_dir = r'c:\Temp\cifar\train'
if not os.path.isdir(save_dir):
    os.makedirs(save_dir)  # create the directory (and any missing parents)
for i in range(train_images.shape[0]):
    file_name = os.path.join(save_dir, str(i) + '.jpg')
    resized = cv2.resize(train_images[i], (224, 224), interpolation=cv2.INTER_AREA)
    # cv2.imwrite expects BGR channel order; CIFAR-10 images are RGB
    status = cv2.imwrite(file_name, cv2.cvtColor(resized, cv2.COLOR_RGB2BGR))
Do the same for the test_images. You can then read them back in as needed; one way to stream them in batches is sketched below.
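As a rough sketch of that last step (my own addition, not the answer's code; it assumes a recent TF 2.x for tf.data.AUTOTUNE and reuses the save_dir, file naming, and train_labels from above), a tf.data pipeline can decode the saved JPEGs one batch at a time instead of loading everything back into memory:

```python
import tensorflow as tf

save_dir = r'c:\Temp\cifar\train'
num_images = train_images.shape[0]  # files were saved as 0.jpg ... 49999.jpg

def load_image(index, label):
    # Rebuild the file name used when saving, then decode one image at a time
    path = tf.strings.join([save_dir, '\\', tf.strings.as_string(index), '.jpg'])
    image = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return image, label

dataset = (tf.data.Dataset.from_tensor_slices((tf.range(num_images), train_labels))
           .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
           .batch(32)
           .prefetch(tf.data.AUTOTUNE))
```

You can then pass `dataset` straight to `model.fit`; only one batch of 224x224 images is resident in memory at a time.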
Upvotes: 0
Reputation: 1489
Even a uint8 array with shape [50000, 244, 244, 3] requires 50000 × 244 × 244 × 3 bytes ≈ 8.9 GB of memory (and tf.image.resize returns float32, which is four times that), so OOM is quite expected. However, if you really need images of this size, you can resize them on the fly via a generator function:
def resized_images_generator():
    for image in train_images:
        # tf.image.resize accepts a single HWC image and returns float32
        yield tf.image.resize(image, (244, 244))
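To actually train from such a generator, one option (a sketch of mine, not part of the answer; pairing images with labels is my assumption, and output_signature in tf.data.Dataset.from_generator requires TF 2.4+) is to wrap it in a tf.data pipeline so only one batch of resized images ever exists in memory:

```python
import tensorflow as tf

def resized_pairs_generator():
    # Yield (image, label) pairs one at a time instead of resizing everything up front
    for image, label in zip(train_images, train_labels):
        yield tf.image.resize(image, (244, 244)), label

dataset = tf.data.Dataset.from_generator(
    resized_pairs_generator,
    output_signature=(
        tf.TensorSpec(shape=(244, 244, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(1,), dtype=tf.uint8),
    ),
).batch(32).prefetch(tf.data.AUTOTUNE)

# model.fit(dataset, epochs=10)  # feed the dataset to your model as usual
```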
Upvotes: 2