Abhishek Gangwar

Reputation: 1767

PyTorch inference from the model is giving me different results every time

I have created and trained one very simple network in pytorch as shown below:

self.task_layers[task][task_layer_key]: TaskLayerManager(
  (taskLayers): ModuleList(
    (0): lc_hidden(
      (dropout_layer): Dropout(p=0.0, inplace=False)
      (layer_norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
    )
    (1): cnn(
      (cnn_layer): CNN_Text(
        (dropout): Dropout(p=0.1, inplace=False)
        (fc1): Linear(in_features=300, out_features=2, bias=True)
        (convs1): ModuleList(
          (0): Conv2d(1, 300, kernel_size=(5, 768), stride=(1, 1), padding=(4, 0))
        )
      )
    )
  )
)

Layer descriptions:
taskLayers.0.linear_weights      torch.Size([13])
taskLayers.0.layer_norm.weight      torch.Size([768])
taskLayers.0.layer_norm.bias      torch.Size([768])
taskLayers.1.cnn_layer.fc1.weight      torch.Size([2, 300])
taskLayers.1.cnn_layer.fc1.bias      torch.Size([2])
taskLayers.1.cnn_layer.convs1.0.weight      torch.Size([300, 1, 5, 768])
taskLayers.1.cnn_layer.convs1.0.bias      torch.Size([300])

It is a binary classification network that takes a 3D tensor of shape [N, K, 768] as input and gives an output tensor of shape [N, 2]. I am not able to figure out why it gives me different results on every run. Please help me with this - I am new to PyTorch. Let me know if any other information is needed.

Upvotes: 1

Views: 1912

Answers (1)

iacob

Reputation: 24161

I suspect this is because you have not set the model to evaluation (inference) mode with

model.eval()

If you don't do this, your dropout layer(s) will remain active and will randomly drop out a proportion p of the neurons on each forward pass, so the same input can produce different outputs.
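For example, a minimal sketch (here `model` stands for your trained network and the input is a dummy batch; the names and sizes are placeholders, not your actual data):

import torch

model.eval()                       # put Dropout (and any BatchNorm) layers into inference mode

with torch.no_grad():              # also disable gradient tracking during inference
    x = torch.randn(4, 10, 768)    # dummy batch of shape [N, K, 768], here N=4, K=10
    out1 = model(x)                # shape [N, 2]
    out2 = model(x)

print(torch.allclose(out1, out2))  # True: repeated calls on the same input now match

You can switch back to training behaviour later with model.train().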

Upvotes: 3
