walker

Reputation: 767

Why does Netron render the BatchNorm2d layer as a bias in my model?

Below is my demo code; it simply shows that I've added a batch norm layer. When I export the model to an ONNX file and render the network in Netron, the BN layer is missing, and even though I disabled bias on the conv layers, a bias still shows up.

After a few modifications to the code I confirmed that the bias shown in Netron is actually the BN layer, because when I delete the BN layer (with bias still disabled), the B section disappears.

Netron renders models I downloaded from the internet correctly, so it can't be the app's problem. What's wrong with my code?

import torch
import torch.nn as nn

class myModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 20, 3, stride=2, bias=False),
            nn.Conv2d(20, 40, 3, stride=2, bias=False),
            nn.BatchNorm2d(40),
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(1000, 8)  # 24x24x3 -> 11x11x20 -> 5x5x40 = 1000
        )

    def forward(self, x):
        return self.layers(x)

m = myModel()
torch.onnx.export(m, (torch.ones(1, 3, 24, 24),), 'test.onnx')
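
To check what actually ended up in the file without a screenshot, you can list the exported graph's nodes with the onnx package (a quick sketch; for this model it prints each operator and its inputs, so you can see which node carries the extra B input and whether a BatchNormalization node is present):

import onnx

model = onnx.load('test.onnx')
for node in model.graph.node:
    # print each operator and its input names
    print(node.op_type, list(node.input))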

Here is the capture: the BatchNorm layer disappeared and a bias (B) input shows up.

[screenshot]


Update: when I delete all the conv layers, the BatchNorm shows up:

[screenshot]

Upvotes: 1

Views: 241

Answers (2)

Gaslight Deceive Subvert

Reputation: 20418

During ONNX export, an optimization pass is applied that fuses a 2D conv layer with the 2D batch norm layer that follows it: the BN's scale and shift are folded into the conv's weight and bias, which is why a bias appears even though you set bias=False. Netron renders the model correctly; it is the exporter that is behaving unexpectedly.
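
For intuition, here is a minimal sketch of the arithmetic behind that fusion (not the exporter's actual implementation): with y = W*x + b, batch norm computes gamma * (y - mu) / sqrt(var + eps) + beta, which is itself just a conv with rescaled weights and a new bias.

import torch
import torch.nn as nn

def fuse_conv_bn(conv, bn):
    # With s = gamma / sqrt(var + eps):  W' = s * W  and  b' = beta + s * (b - mu),
    # so the fused conv always has a bias, even if the original had bias=False.
    s = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    fused.weight.data = conv.weight.data * s.reshape(-1, 1, 1, 1)
    b = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = bn.bias.data + s * (b - bn.running_mean)
    return fused

Applied to the model from the question, fuse_conv_bn(m.layers[1], m.layers[2]) reproduces the conv-with-bias node that Netron shows.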

You can preserve the batch norm layers by putting the model into training mode before exporting:

net = myModel()
net.train()  # training mode keeps batch norm as a separate layer
torch.onnx.export(
    net, (torch.ones(1, 3, 24, 24),), "net.onnx",
    do_constant_folding=False
)
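
On PyTorch versions where the exporter resets the model's mode itself, the same thing can be requested explicitly through the export call's training argument (whether this is needed depends on your PyTorch version):

torch.onnx.export(
    net, (torch.ones(1, 3, 24, 24),), "net.onnx",
    training=torch.onnx.TrainingMode.TRAINING,
    do_constant_folding=False
)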

Upvotes: 0

walker

Reputation: 767

It's a version-specific problem. Also, if I switch the order of the BN and ReLU layers, Netron renders the BN layer normally; a sketch of the reordering is below.
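
This is the reordered Sequential from the question's model (note it changes the network's math; the point is that with ReLU sitting between the conv and the BN, the Conv+BN pattern the fusion pass matches is broken):

self.layers = nn.Sequential(
    nn.Conv2d(3, 20, 3, stride=2, bias=False),
    nn.Conv2d(20, 40, 3, stride=2, bias=False),
    nn.ReLU(inplace=True),   # ReLU now separates the conv from the BN,
    nn.BatchNorm2d(40),      # so the Conv+BN fusion no longer matches
    nn.Flatten(),
    nn.Linear(1000, 8)
)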

Upvotes: 0
