Yan Tijin

Reputation: 181

The result is not reproducible after setting the random seed in PyTorch

import random

import numpy as np
import torch

def setup_seed(seed):
    np.random.seed(seed)                        # seed NumPy RNG
    random.seed(seed)                           # seed Python's built-in RNG
    torch.manual_seed(seed)                     # seed PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)            # seed all GPU RNGs
    torch.backends.cudnn.deterministic = True   # restrict cuDNN to deterministic algorithms
    torch.backends.cudnn.benchmark = True       # enable the cuDNN auto-tuner

I set the random seed when running the code, but I still cannot get reproducible results with PyTorch. I also use batch normalization in my model, and I call model.eval() during evaluation and testing. I cannot figure out the reason for this.

Upvotes: 3

Views: 2990

Answers (1)

Girish Hegde

Reputation: 1515

I think the line torch.backends.cudnn.benchmark = True is causing the problem. It enables the cuDNN auto-tuner, which benchmarks several implementations and picks the fastest one for your hardware. For example, a convolution can be implemented using one of these algorithms:

     CUDNN_CONVOLUTION_FWD_ALGO_GEMM,
     CUDNN_CONVOLUTION_FWD_ALGO_FFT,
     CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING,
     CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM,
     CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM,
     CUDNN_CONVOLUTION_FWD_ALGO_DIRECT,
     CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD,
     CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED,

Several of these algorithms are non-deterministic and come with no reproducibility guarantees.

So use torch.backends.cudnn.benchmark = False for deterministic outputs (this may slow down execution).
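
Applied to the seeding function from your question, a minimal sketch would look like this (only the last flag changes, everything else stays as you wrote it):

    import random

    import numpy as np
    import torch

    def setup_seed(seed):
        np.random.seed(seed)
        random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False   # disable the auto-tuner for reproducibility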

Also, some PyTorch operations have no deterministic implementation at all; refer to the PyTorch documentation on reproducibility for the list.
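
If you are on a recent PyTorch version (1.8 or newer, as far as I know), you can also ask PyTorch to flag such operations explicitly instead of letting them silently introduce run-to-run differences; a small sketch:

    import torch

    # Raise a RuntimeError whenever an operation without a
    # deterministic implementation is used.
    torch.use_deterministic_algorithms(True)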

Upvotes: 4
