Reputation: 5025
My objective is to introduce random scaling and translation for data augmentation purposes.
distorted_image = tf.image.resize_images(distorted_image, random_scale, random_scale)
distorted_image = tf.image.crop_to_bounding_box(distorted_image, random_y, random_x, 299, 299)
This fails with 'image' must be fully defined.
Swapping the lines works, but doesn't do what I really need.
distorted_image = tf.image.crop_to_bounding_box(distorted_image, random_y, random_x, 299, 299)
distorted_image = tf.image.resize_images(distorted_image, random_scale, random_scale)
So it seems like resize_images loses the shape of the image tensor, and crop_to_bounding_box then fails. Is this intentional, or am I missing something? Why does random_crop work after a resize when crop_to_bounding_box doesn't?
Upvotes: 1
Views: 1942
Reputation: 126194
The tf.image.resize_images() op does set the image shape in its implementation. (This was added in TensorFlow 0.7.)
However, if either of the new_height or new_width arguments is a dynamic value, then TensorFlow cannot infer a single static size for that dimension, and so uses None for it. I notice in your code that the new height and width values are called random_scale: if a new random value is drawn on each step, then the shape will have None for the height and width dimensions.
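For example, something like the following sketch shows the effect (written against the later TF 1.x graph API, where the target size is a single [height, width] tensor rather than the separate new_height/new_width arguments in your code; the shape behaviour is the same):

import tensorflow as tf

# A dynamic target size leaves the static height/width of the result unknown.
image = tf.placeholder(tf.float32, shape=[480, 640, 3])
random_scale = tf.random_uniform([], minval=299, maxval=480, dtype=tf.int32)
resized = tf.image.resize_images(image, tf.stack([random_scale, random_scale]))
print(resized.get_shape())  # => (?, ?, 3), i.e. height and width are None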
Note that in this case the tf.image.crop_to_bounding_box() op will not work because, as the error message indicates, the current implementation requires that the shape of the input be fully defined. As I noted in a recent answer, the best workaround might be to use the lower-level ops from which tf.image.crop_to_bounding_box() is implemented (in particular tf.slice() with computed indices).
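A rough sketch of that approach (assuming the same graph-style API, and reusing the variable names from your code, which are not defined here):

import tensorflow as tf

# Crop with tf.slice() and computed offsets; unlike crop_to_bounding_box,
# this does not require the input's static shape to be fully defined.
def crop_box(image, offset_y, offset_x, target_h, target_w):
    begin = tf.stack([offset_y, offset_x, 0])
    size = tf.stack([target_h, target_w, -1])  # -1 keeps all channels
    cropped = tf.slice(image, begin, size)
    # The crop size is static, so it can be restored on the result.
    cropped.set_shape([target_h, target_w, None])
    return cropped

# distorted_image = crop_box(distorted_image, random_y, random_x, 299, 299)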
Upvotes: 3