Reputation: 3572
I load a Keras model from .json and .hdf5 files. When I call model.evaluate(), it returns this error:

You must compile a model before training/testing. Use model.compile(optimizer, loss)

Why do I need to compile the model to run evaluate()?
To add: the same model can be used with predict() with no problem.
Upvotes: 23
Views: 38674
Reputation: 86650
Because evaluate will calculate the loss function and the metrics.
You don't have either of them until you compile the model; they're parameters of the compile method:
model.compile(optimizer=..., loss=..., metrics=...)
On the other hand, predict doesn't evaluate any metric or loss; it just passes the input data through the model and returns its output.
You need the "loss" for training too, so you can't train without compiling either. And you can compile a model as many times as you want, even changing the parameters each time.
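A minimal sketch of that workflow, assuming tensorflow.keras (the model and data here are toy stand-ins for your own .json/.hdf5 files): rebuild the architecture from JSON, restore the weights, then compile before calling evaluate().

```python
import numpy as np
from tensorflow import keras

# Toy stand-in for a model saved to .json (architecture) + weights.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(2,))])
reloaded = keras.models.model_from_json(model.to_json())
reloaded.set_weights(model.get_weights())

# Without this compile() call, reloaded.evaluate(...) raises the
# "You must compile a model before training/testing" error.
reloaded.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x = np.zeros((4, 2), dtype="float32")
y = np.zeros((4, 1), dtype="float32")
loss, mae = reloaded.evaluate(x, y, verbose=0)
```

The optimizer choice here doesn't matter for evaluate(); compile() just needs to attach a loss (and optionally metrics) to the model.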
The outputs and the loss function:
The model's outputs depend only on its weights, and weights exist automatically: you can predict from any model, even without any training, because every model in Keras is already born with weights (either initialized by you or randomly initialized).
You input something, the model calculates the output. At the end of everything, this is all that matters: a good model has proper weights and outputs things correctly.
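That predict-without-compiling behavior can be sketched like this (assuming tensorflow.keras; the architecture is arbitrary):

```python
import numpy as np
from tensorflow import keras

# A brand-new model: never trained, never compiled, but it already has
# (randomly initialized) weights, so predict() works immediately.
model = keras.Sequential([
    keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    keras.layers.Dense(1),
])

x = np.random.rand(5, 3).astype("float32")
preds = model.predict(x, verbose=0)  # just a forward pass, no loss involved
```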
But before getting to that end, your model needs to be trained.
Now, the loss function takes the current output and compares it with the expected/true result. It's a function that is supposed to be minimized: the lower the loss, the closer your results are to the expected ones. This is the function from which the derivatives are taken so the backpropagation algorithm can update the weights.
The loss function is not useful for the final purpose of the model, but it's necessary for training. That's probably why you can have models without loss functions (and consequently, there is no way to evaluate them).
Upvotes: 33
Reputation: 17
To add to @Daniel Möller's great answer, recompiling the model also re-enables the (custom) metrics you used to monitor validation loss with, or that you now want to calculate on test data with a simple model.evaluate call. This makes sure you use exactly the same metrics on your test data.
If you pass y_test along, this even allows calculating the loss on the test samples, which is often reported in research papers.
Upvotes: 0
Reputation: 455
I know you're asking why, and I believe the answer above should suffice. However, if you get this error, it may simply be a coding mistake, as it was in my case: I copied model_1 and pasted it to create model_2, but forgot to change part of the code from model_1 to model_2. This was a bonehead move on my part, but it produced exactly the same error as stated above.
Upvotes: 0