Reputation: 307
I have just begun learning machine learning and am using TensorFlow 1.14. I have created my first model with tensorflow.keras, using the built-in tensorflow.keras.datasets.mnist dataset. Here is the code for my model:
import tensorflow as tf
from tensorflow import keras
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
class Stopper(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, log={}):
        if log.get('acc') >= 0.99:
            self.model.stop_training = True
            print('\nReached 99% Accuracy. Stopping Training...')
model = keras.Sequential([
    keras.layers.Flatten(),
    keras.layers.Dense(1024, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)])
model.compile(
    optimizer=tf.train.AdamOptimizer(),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])
x_train, x_test = x_train / 255, x_test / 255
model.fit(x_train, y_train, epochs=10, callbacks=[Stopper()])
Now that the model is trained, I can feed the x_test
images into model.predict()
and that works fine. But I was wondering how to feed my own images (JPG and PNG) into my model's predict()
method?
I have looked at the documentation, but the method shown there results in an error for me. In particular, I tried the following:
img_raw = tf.read_file(<my file path>)
img_tensor = tf.image.decode_image(img_raw)
img_final = tf.image.resize(img_tensor, [192, 192])
^^^ This line throws the error: ValueError: 'images' contains no shape.
Please provide a step-by-step guide for getting an image (JPG or PNG) into my model for a prediction. Thank you very much.
Upvotes: 1
Views: 9829
Reputation: 16926
from PIL import Image
import numpy as np

# load the image, convert it to grayscale, and resize it to the model's input size
img = Image.open("image_file_path").convert('L').resize((28, 28), Image.ANTIALIAS)
img = np.array(img) / 255.0  # scale pixel values to [0, 1], matching the training data
model.predict(img[None, :, :])
You have trained your model on images of size (28 x 28), so you have to resize your own image to the same dimensions. You cannot use images of a different size.
predict() expects a batch of images, but since you want to make a prediction on a single image, you have to add an extra batch dimension for that image. This can be done with np.expand_dims, reshape, or img[None, :, :], as shown below.
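For example, a minimal sketch, assuming img is the (28, 28) NumPy array prepared above:
import numpy as np

batch = np.expand_dims(img, axis=0)  # shape (1, 28, 28)
batch = img.reshape(1, 28, 28)       # same result via reshape
batch = img[None, :, :]              # same result via None-indexing

prediction = model.predict(batch)          # shape (1, 10): one probability per digit class
predicted_digit = np.argmax(prediction[0])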
Upvotes: 3
Reputation: 1
Every image is fundamentally made of pixels, and you can pass these pixel values to your neural network.
To convert the image into an array of pixels, you can use a library like skimage as follows.
from skimage.io import imread
imagedata = imread(imagepath)
# you can pass this image array to the model
To read a group of images, loop over them and store the data in an array (see the sketch after the resize line below). You will also have to resize all the pictures to the same dimensions before loading them into your NN.
from skimage.transform import resize
resized_image = resize(imagedata, (preferred_height, preferred_width))
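A minimal sketch of looping over a folder of images and stacking them into one batch; the folder name and the (28, 28) target size are assumptions chosen to match the model above:
import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize

images = []
for path in glob.glob('my_images/*.png'):       # assumed folder of image files
    imagedata = imread(path, as_gray=True)      # read the file and convert it to grayscale
    images.append(resize(imagedata, (28, 28)))  # resize; values come back as floats in [0, 1]
batch = np.array(images)                        # shape (num_images, 28, 28)
predictions = model.predict(batch)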
You can also choose to convert the image to grayscale to reduce the number of computations. Here I am using Pillow, a common image preprocessing library, to apply the grayscale conversion:
from PIL import Image
# load the image
image = Image.open('opera_house.jpg')
# convert the image to grayscale
gs_image = image.convert(mode='L')
The order of preprocessing can be:
1. read the image and convert it to grayscale
2. resize the image to the dimensions the model expects
3. convert it into a numpy array and add a batch dimension before calling predict()
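Putting these steps together, a minimal sketch that reuses the trained model from the question (the file name is only an example):
import numpy as np
from PIL import Image

# 1. load the image and convert it to grayscale
image = Image.open('opera_house.jpg').convert('L')
# 2. resize to the model's expected input size
image = image.resize((28, 28))
# 3. convert to a numpy array, scale to [0, 1], and add the batch dimension
pixels = np.array(image) / 255.0
print(model.predict(pixels[None, :, :]))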
Upvotes: 0