Reputation: 323
I am using the patchify library to create patches from a bigger .jpg image, following the code from this YouTube video: https://www.youtube.com/watch?v=7IL7LKSLb9I&ab_channel=DigitalSreeni
In the video, the author reads 12 TIFF images and gets a large_image_stack variable of shape (12, 768, 1024), i.e. 12 images of 768x1024 each.
I have a single 3000x4000 JPG image, and the shape I get for the large_image_stack variable is (3000, 4000, 3). I then run the following code:
import numpy as np
from matplotlib import pyplot as plt
from patchify import patchify
import cv2

large_image_stack = cv2.imread("test.jpg")

for img in range(large_image_stack.shape[0]):
    large_image = large_image_stack[img]
    patches_img = patchify(large_image, (224,224), step=224)
    for i in range(patches_img.shape[0]):
        for j in range(patches_img.shape[1]):
            single_patch_img = patches_img[i,j,:,:]
            cv2.imwrite('patches/images/' + 'image_' + str(img) + '_' + str(i) + str(j) + '.jpg', single_patch_img)
But I am getting the following error:
ValueError: `window_shape` is too large
Looking in the view_as_windows.py from the patchify library I see the following:
arr_shape = np.array(arr_in.shape)
window_shape = np.array(window_shape, dtype=arr_shape.dtype)

if ((arr_shape - window_shape) < 0).any():
    raise ValueError("`window_shape` is too large")
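That check can be reproduced without the library (a numpy-only sketch of the shapes involved, using a dummy array in place of the actual test.jpg):

```python
import numpy as np

# Stand-in for the (3000, 4000, 3) array cv2.imread returns for the image
large_image_stack = np.zeros((3000, 4000, 3), dtype=np.uint8)

# Indexing the first axis yields a (4000, 3) slice, not a full image
large_image = large_image_stack[0]
print(large_image.shape)  # (4000, 3)

# patchify then compares that (4000, 3) shape against the (224, 224) window
arr_shape = np.array(large_image.shape)
window_shape = np.array((224, 224), dtype=arr_shape.dtype)
print((arr_shape - window_shape) < 0)  # [False  True] -> triggers the ValueError
```

So the second element of the comparison (3 - 224 < 0) is what makes the check fire.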
As I am quite new to these things, I can't figure out how to solve this error.
Any help would be very appreciated!!
Upvotes: 3
Views: 7696
Reputation: 31
As @OlegRuskiy said, you need to use a window of (224,224,3), where 3 is your number of channels.
Something he didn't mention, but which is correct in his code, is that because this kernel also moves through your depth dimension, you get one extra singleton depth dimension: for me the result was (16, 24, 1, 256, 256, 3) instead of (16, 24, 256, 256, 3). So I used np.squeeze() to remove that 1, and the output became (16, 24, 256, 256, 3).
import numpy as np
from skimage import io
from patchify import patchify

for file in files:
    img = io.imread(path + file)
    patches = patchify(img, (256, 256, 3), step=256-32)  # 32 is the number of desired overlapping pixels
    print(patches.shape)
    patches = np.squeeze(patches)  # drop the singleton depth dimension
    print(patches.shape)
    for i in range(patches.shape[0]):
        for j in range(patches.shape[1]):
            patch = patches[i, j, :, :, :]
            io.imsave(opath + file.split('.')[0] + '_r' + str(i).zfill(2) + '_c' + str(j).zfill(2) + '.jpg', patch)
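The squeeze step in isolation (a minimal sketch with a dummy array of the shape mentioned above, no image files needed):

```python
import numpy as np

# Dummy patch grid with the extra singleton depth axis patchify produces
patches = np.zeros((16, 24, 1, 256, 256, 3), dtype=np.uint8)
print(patches.shape)  # (16, 24, 1, 256, 256, 3)

# np.squeeze drops all length-1 axes, leaving (rows, cols, height, width, channels)
patches = np.squeeze(patches)
print(patches.shape)  # (16, 24, 256, 256, 3)
```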
Upvotes: 1
Reputation: 323
I figured out how to solve the issue, as it was a simple error. Basically, I only have one image, so it does not make sense to iterate over images with the for loop.
Then, for the image itself: since it is BGR, it is necessary to change the patch size to (224,224,3).
Finally, to save the patches, I use the corrected code provided by @Rotem in another question I asked.
This is what the final result looks like:
import cv2
from patchify import patchify

img = cv2.imread("test.jpg")
patches_img = patchify(img, (224, 224, 3), step=224)

for i in range(patches_img.shape[0]):
    for j in range(patches_img.shape[1]):
        single_patch_img = patches_img[i, j, 0, :, :, :]
        if not cv2.imwrite('patches/images/' + 'image_' + '_' + str(i) + str(j) + '.jpg', single_patch_img):
            raise Exception("Could not write the image")
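As a sanity check (a numpy-only sketch, assuming the same 3000x4000 BGR image), the patch grid patchify returns can be reproduced with sliding_window_view; the singleton third axis it produces is why the loop indexes patches_img[i, j, 0, :, :, :]:

```python
import numpy as np

# Stand-in for cv2.imread("test.jpg") on a 3000x4000 BGR image
img = np.zeros((3000, 4000, 3), dtype=np.uint8)

# All (224, 224, 3) windows, then keep every 224th one -> the step=224 grid
windows = np.lib.stride_tricks.sliding_window_view(img, (224, 224, 3))[::224, ::224, :]
print(windows.shape)  # (13, 17, 1, 224, 224, 3)
```

The channel axis is fully consumed by the window, so it collapses to length 1 rather than disappearing, leaving 13x17 patches of 224x224x3.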
Upvotes: 3