paime

Reputation: 3542

PyTorch conv2d doesn't propagate torch.channels_last memory format

When I use the torch.nn.functional.conv2d operator on tensors that have the channels_last memory format, the output does not keep this memory format.

I don't understand why, since conv2d is listed in the PyTorch wiki's list of operators with channels_last support.

Am I missing something?

Code to reproduce (tested with PyTorch 1.6.0 and 1.7.0, on CPU, on Ubuntu 20.04):

import torch
import torch.nn.functional as F

N, C, H, W = 10, 4, 64, 128
out_channels = 2
kernel_size = (3, 3)

memory_format = torch.channels_last

tsr = torch.randn(N, C, H, W).to(memory_format=memory_format)
kernel = torch.randn(out_channels, C, *kernel_size).to(memory_format=memory_format)

conv_out = F.conv2d(tsr, kernel)

print(conv_out.is_contiguous(memory_format=memory_format)) # False
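Note that the input itself does report the channels_last format, so the format is being lost inside the operator rather than on the way in:

print(tsr.is_contiguous(memory_format=memory_format)) # True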

Upvotes: 1

Views: 1433

Answers (1)

Ivan

Reputation: 40618

The conv2d operator is listed in the wiki under the GPU operators supporting channels_last; this is not true of the CPU version of conv2d.

If you switch to a CUDA device, the same check returns True:

tsr = torch.randn(N, C, H, W).to('cuda', memory_format=memory_format)
kernel = torch.randn(out_channels, C, *kernel_size).to('cuda', memory_format=memory_format)
conv_out = F.conv2d(tsr, kernel)

>>> conv_out.is_contiguous(memory_format=memory_format)
True
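If you need the output in channels_last on CPU, one workaround (a sketch based on the setup from the question, not something the CPU operator does for you) is to convert the result back explicitly; contiguous(memory_format=...) will copy the data if the layout actually differs:

# convert the CPU conv output back to channels_last (may copy)
conv_out = F.conv2d(tsr, kernel).contiguous(memory_format=torch.channels_last)

>>> conv_out.is_contiguous(memory_format=torch.channels_last)
True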

Upvotes: 2
