Reputation: 1106
I am currently working on getting a Keras-trained model running in the browser via TensorFlow.js. I have to reduce the image to 48x48 before passing it into my model, since all the data I trained on was that size. I've reduced the size of my webcam snapshot to 48x48 using:
let imgclone = tf.image.resizeBilinear(imgmod, [48,48], true).toFloat();
This lets me draw correctly to the canvas and get my real-time reduced-size webcam stream in a smaller canvas. When I then pass the snapshot to my model, I get the error below:
expected conv2d_1_input to have shape [null,48,48,1] but got array with shape [1,48,48,3].
So I just wasn't sure what the proper way of getting my final tensor into the expected shape would be when using TensorFlow.js. I did try tf.reshape(preprocedimg, [null, 48, 48, 1]), but of course reshape only rearranges the existing elements, so it gave an error that my sizes didn't match (48x48x3 values can't be reshaped into 48x48x1).
Going to continue scouring Google, but thought I would post here as well. Any info you might be able to provide would be greatly appreciated!
Upvotes: 2
Views: 1632
Reputation: 11
I had the same problem. I just want to share the solution I found.
tf.browser.fromPixels(pixels, numChannels?): numChannels defaults to 3 (RGB); change it to 1 as shown below.
let tensor = tf.browser.fromPixels(image, 1);
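For context, here is a rough sketch of how the whole preprocessing chain might look with this change, assuming a webcam <video> element named video and a model that expects a [null, 48, 48, 1] input; the variable names are just placeholders:

// Grab the current frame as a single-channel tensor of shape [height, width, 1].
// Note: numChannels = 1 keeps only the first (red) channel rather than averaging to grayscale.
const input = tf.browser.fromPixels(video, 1)
    // Resize to the 48x48 size the model was trained on.
    .resizeBilinear([48, 48], true)
    // Add a batch dimension so the shape becomes [1, 48, 48, 1].
    .expandDims(0)
    // Convert to float and normalize the same way as during training (assumed here to be /255).
    .toFloat()
    .div(255);

const prediction = model.predict(input);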
I hope that helps you as well.
TensorFlow.js documentation: https://js.tensorflow.org/api/latest/#browser.fromPixels
Upvotes: 1
Reputation: 18371
Since you want to get a tensor of shape [48, 48, 1], you can use tf.slice:
tensor.slice([0, 0, 0], [48, 48, 1])
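If your tensor already carries a batch dimension (shape [1, 48, 48, 3], as in the error message), the same idea works with rank-4 begin and size arguments. A minimal sketch, assuming preprocedimg is that batched tensor:

// Keep the whole batch, height, and width axes, but only the first channel.
// Result shape: [1, 48, 48, 1] (only the red channel is kept).
const singleChannel = preprocedimg.slice([0, 0, 0, 0], [1, 48, 48, 1]);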
Upvotes: 1