user131379

Reputation: 321

Why evaluate test data in a loop in tensorflow?

A few TensorFlow models posted on GitHub run their 'evaluation' function in a while loop, for example ResNet (in resnet_main.py).

I wonder why we need to evaluate the test data more than once.

Upvotes: 1

Views: 585

Answers (2)

P-Gn

Reputation: 24651

The test data is evaluated once: the loop is over its samples. The reason is rather mundane: when the test set is large, running the whole network on all of its samples at once would not fit in memory, so it cannot be processed as a whole. In that case, it is split into minibatches.

So even though both training and testing loop over minibatches, the underlying reasons are quite different.
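A minimal sketch of what such an inner loop looks like, assuming a hypothetical `model` (any `tf.keras.Model`) and `test_dataset` (a `tf.data.Dataset` already split into minibatches) — the repositories in the question use older session-based code, but the structure is the same:

```python
import tensorflow as tf

def evaluate(model, test_dataset):
    """Run the model over the whole test set one minibatch at a time."""
    accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
    for images, labels in test_dataset:         # single pass over the test set
        logits = model(images, training=False)  # forward pass only, no gradients
        accuracy.update_state(labels, logits)   # accumulate per-batch statistics
    return accuracy.result().numpy()            # metric over the *whole* test set
```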

EDIT

The outer loop has a different role: a new model is loaded at each iteration. This is used when you run the evaluation in a separate process that regularly reads the checkpoints written by training to disk and evaluates them.

The rationale is explained here: it is useful when training and evaluation run in different environments, for example on different GPUs.
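A rough sketch of that outer loop, under the assumption that a separate training process writes checkpoints to a hypothetical `train_dir` (resnet_main.py does this with session-based code; this sketch uses the `tf.train.Checkpoint` API instead, and `poll_seconds` is made up):

```python
import time
import tensorflow as tf

def evaluation_loop(model, test_dataset, train_dir, poll_seconds=60):
    """Repeatedly reload the latest checkpoint from training and re-evaluate."""
    checkpoint = tf.train.Checkpoint(model=model)
    last_seen = None
    while True:                                          # outer loop: one iteration per new checkpoint
        latest = tf.train.latest_checkpoint(train_dir)
        if latest is not None and latest != last_seen:
            checkpoint.restore(latest).expect_partial()  # load the newest weights from disk
            acc = evaluate(model, test_dataset)          # inner loop from the sketch above
            print(f"{latest}: test accuracy = {acc:.4f}")
            last_seen = latest
        time.sleep(poll_seconds)                         # wait for training to write another checkpoint
```

The outer loop thus evaluates a different model each time, not the same test data on the same model twice.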

Upvotes: 1

nessuno

Reputation: 27050

You don't evaluate the test data more than once. You evaluate disjoint subsets of the test set in order to obtain the evaluation of the union of those subsets (the whole test set).
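In other words, each batch contributes its partial result, and the per-batch results are combined into a single metric over the whole test set. A toy illustration with made-up numbers:

```python
# Hypothetical example: three disjoint minibatches whose union is the
# whole (tiny) test set.  Each entry is (correct_predictions, batch_size).
per_batch_results = [(120, 128), (118, 128), (60, 64)]

total_correct = sum(correct for correct, _ in per_batch_results)
total_seen = sum(size for _, size in per_batch_results)

# Accuracy over the union of the disjoint subsets == accuracy over the test set.
test_accuracy = total_correct / total_seen  # (120 + 118 + 60) / (128 + 128 + 64) = 0.93125
print(test_accuracy)
```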

Upvotes: 0
