Phylliade

Reputation: 1757

What is the difference between the predict and predict_on_batch methods of a Keras model?

According to the keras documentation:

predict_on_batch(self, x)
Returns predictions for a single batch of samples.

However, there does not seem to be any difference from the standard predict method when it is called on a batch, whether that batch contains one or multiple elements.

model.predict_on_batch(np.zeros((n, d_in)))

is the same as

model.predict(np.zeros((n, d_in)))

(both return a numpy.ndarray of shape (n, d_out)).

Upvotes: 32

Views: 43596

Answers (3)

Kutay YILDIZ

Reputation: 111

It seems predict_on_batch is a lot faster than predict when executed on a single batch.

  • batch & model information
    • batch shape: (1024, 333)
    • batch dtype: float32
    • model parameters: ~150k
  • timeit result:
    • predict: ~1.45 seconds
    • predict_on_batch: ~95.5 ms

In summary, the predict method has extra operations to ensure that a collection of batches is processed correctly, whereas predict_on_batch is a lightweight alternative that should be used on a single batch.
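The extra work can be sketched in plain NumPy, without Keras: predict preallocates an output array, slices the input into batch_size chunks, and runs the forward pass once per chunk, while predict_on_batch amounts to a single call. Here fake_forward is a hypothetical stand-in for the compiled model function, and the loop is only an illustration of the overhead, not the actual Keras source:

```python
import numpy as np

def fake_forward(batch):
    # Hypothetical stand-in for the compiled forward pass.
    return batch * 2.0

def predict_like(x, batch_size=32):
    # Roughly what predict's internal loop does: preallocate the output,
    # then slice the input and run one batch at a time.
    out = np.empty_like(x)
    for start in range(0, len(x), batch_size):
        end = min(start + batch_size, len(x))
        out[start:end] = fake_forward(x[start:end])
    return out

def predict_on_batch_like(x):
    # predict_on_batch: a single call, no slicing or preallocation.
    return fake_forward(x)

x = np.ones((1024, 333), dtype=np.float32)
assert np.array_equal(predict_like(x), predict_on_batch_like(x))
```

For a single batch the per-chunk bookkeeping buys nothing, which is consistent with the timings above.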

Upvotes: 8

Jorge E. Cardona

Reputation: 95418

I just want to add something that does not fit in a comment. It seems that predict carefully checks the output shape:

# Layer whose declared output shape deliberately disagrees with the
# shape it actually produces: summing over axis 0 drops a dimension,
# but compute_output_shape claims the shape is unchanged.
class ExtractShape(keras.engine.topology.Layer):
    def call(self, x):
        return keras.backend.sum(x, axis=0)

    def compute_output_shape(self, input_shape):
        return input_shape

a = keras.layers.Input((None, None))
b = ExtractShape()(a)
m = keras.Model(a, b)
m.compile(optimizer=keras.optimizers.Adam(), loss='binary_crossentropy')
A = np.ones((5, 4, 3))

Then:

In [163]: m.predict_on_batch(A)
Out[163]: 
array([[5., 5., 5.],
       [5., 5., 5.],
       [5., 5., 5.],
       [5., 5., 5.]], dtype=float32)
In [164]: m.predict_on_batch(A).shape
Out[164]: (4, 3)

But:

In [165]: m.predict(A)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-165-c5ba5fc88b6e> in <module>()
----> 1 m.predict(A)

~/miniconda3/envs/ccia/lib/python3.6/site-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps)
   1746         f = self.predict_function
   1747         return self._predict_loop(f, ins, batch_size=batch_size,
-> 1748                                   verbose=verbose, steps=steps)
   1749 
   1750     def train_on_batch(self, x, y,

~/miniconda3/envs/ccia/lib/python3.6/site-packages/keras/engine/training.py in _predict_loop(self, f, ins, batch_size, verbose, steps)
   1306                         outs.append(np.zeros(shape, dtype=batch_out.dtype))
   1307                 for i, batch_out in enumerate(batch_outs):
-> 1308                     outs[i][batch_start:batch_end] = batch_out
   1309                 if verbose == 1:
   1310                     progbar.update(batch_end)

ValueError: could not broadcast input array from shape (4,3) into shape (5,3)

I am not sure whether this is really a bug.
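The ValueError itself is plain NumPy broadcasting behaviour and can be reproduced without Keras: _predict_loop preallocates outs using the input's leading dimension (5 here), then tries to write the (4, 3) result the model actually produced into a (5, 3) slice. A minimal sketch:

```python
import numpy as np

outs = np.zeros((5, 3))      # preallocated from the input's leading dim
batch_out = np.ones((4, 3))  # what the model actually returned
try:
    outs[0:5] = batch_out    # same assignment as in _predict_loop
    print("no error")
except ValueError as e:
    print(e)                 # could not broadcast (4,3) into (5,3)
```

predict_on_batch skips this preallocate-and-assign step, which is why it returns the (4, 3) array without complaint.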

Upvotes: 3

GPhilo

Reputation: 19123

The difference shows when the data you pass as x is larger than one batch.

predict will go through all the data, batch by batch, predicting labels. It thus internally splits the data into batches and feeds one batch at a time.

predict_on_batch, on the other hand, assumes that the data you pass in is exactly one batch and feeds it to the network as-is. It won't try to split it (which, depending on your setup, can be a problem for your GPU memory if the array is very big).
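The splitting predict performs can be sketched as a small standalone helper that computes (start, end) index pairs, in the spirit of the internal batching logic in the Keras source (this version is only an illustration, not the actual implementation):

```python
def make_batches(size, batch_size):
    # Split `size` samples into (start, end) index pairs covering at most
    # `batch_size` samples each; predict slices its input along these
    # boundaries, while predict_on_batch uses the whole array in one go.
    return [(i, min(i + batch_size, size))
            for i in range(0, size, batch_size)]

print(make_batches(10, 4))  # [(0, 4), (4, 8), (8, 10)]
```

With the default batch_size of 32, passing a 10,000-row array to predict means ~313 forward passes of bounded size, whereas predict_on_batch would attempt a single forward pass over all 10,000 rows.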

Upvotes: 40
