ArunJose

Reputation: 2159

Differing results when using a model to infer on a batch vs individually with PyTorch

I have a neural network that takes an input tensor of shape (batch_size, 100, 1, 1) and produces an output tensor of shape (batch_size, 3, 64, 64). I get different results when running the model on a batch of two elements than when running it on the same elements individually.

With the code below I initialize a PyTorch tensor of shape (2, 100, 1, 1). I pass this tensor through the model, take the first element of the output, and store it in the variable result1. For result2 I run the first element of the original input tensor through the model directly.

import torch

Z_DIM = 100  # latent size; the input has shape (batch_size, 100, 1, 1)

inputbatch = torch.randn(2, Z_DIM, 1, 1, device=device)
inputElement = inputbatch[0].unsqueeze(0)  # keep a batch dimension: (1, Z_DIM, 1, 1)

result1 = model(inputbatch)[0]  # first element of the batched output, shape (3, 64, 64)
result2 = model(inputElement)   # same element run alone, shape (1, 3, 64, 64)

My expectation was that result1 and result2 would be the same, but they are entirely different. Could anyone explain why the two outputs differ?
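For reference, a minimal way to compare the two, assuming the code above has run (note that result2 keeps its batch dimension, so the element-wise comparison is against result2[0]):

print(result1.shape, result2.shape)               # torch.Size([3, 64, 64]) torch.Size([1, 3, 64, 64])
print(torch.allclose(result1, result2[0]))        # I expected True, but this prints False
print((result1 - result2[0]).abs().max().item())  # magnitude of the largest discrepancy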

Upvotes: 5

Views: 1697

Answers (1)

Xxxo

Reputation: 1931

This is probably because your model has some random or batch-dependent processes that are either training-specific and have not been disabled (e.g. by calling model.eval()), or that the model genuinely needs at inference time. Typical examples are Dropout layers (which apply random masks in training mode) and BatchNorm layers (which normalise with the current batch's statistics in training mode, so one element's output depends on the other elements in the batch).

To test the above, use:


model = model.eval()

before obtaining result1 and result2.
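As a self-contained sketch of the effect (the toy model below is an assumption for illustration, not your actual network): a BatchNorm layer in training mode normalises with the statistics of the current batch, so the output for one element depends on what else is in the batch; after model.eval() the stored running statistics are used instead and the two results agree.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the real generator (assumed architecture, for illustration only)
model = nn.Sequential(nn.ConvTranspose2d(100, 3, kernel_size=4), nn.BatchNorm2d(3))
x = torch.randn(2, 100, 1, 1)

model.train()                # the default state after construction
a = model(x)[0]              # first element, normalised with 2-sample batch statistics
b = model(x[:1])[0]          # same element alone, normalised with 1-sample statistics
print(torch.allclose(a, b))  # False

model.eval()                 # BatchNorm switches to its running statistics
a = model(x)[0]
b = model(x[:1])[0]
print(torch.allclose(a, b))  # True: the output no longer depends on the batch

If such randomness is genuinely needed at inference time (e.g. Monte Carlo dropout), then differing outputs are the expected behaviour.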

Upvotes: 9
