Reputation: 959
Let's say I have a network model object called m. Now I have no prior information about the number of layers this network has. How can I write a for loop to iterate over its layers?
I am looking for something like:
Weight = []
for layer in m._modules:
    Weight.append(layer.weight)
Upvotes: 26
Views: 52862
Reputation: 24251
You can use the children method:
for module in model.children():
    # ...
Or, if you want to flatten Sequential layers:
for module in model.modules():
    if not isinstance(module, nn.Sequential):
        # ...
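For the original goal of collecting the weights, a minimal sketch along these lines (assuming the layers of interest, such as nn.Linear or nn.Conv2d, expose a weight parameter):
import torch.nn as nn

weights = []
for module in model.modules():
    # keep only leaf modules that actually carry a weight parameter
    # (skips containers like nn.Sequential and layers such as nn.ReLU)
    if hasattr(module, 'weight') and isinstance(module.weight, nn.Parameter):
        weights.append(module.weight)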
Upvotes: 5
Reputation: 5109
You can do this too:
for name, m in mdl.named_children():
    print(name)
    print(list(m.parameters()))
Reference: https://discuss.pytorch.org/t/how-to-get-the-module-names-of-nn-sequential/39682

# looping through modules, but getting the one with a specific name
import torch
import torch.nn as nn
from collections import OrderedDict

params = OrderedDict([
    ('fc0', nn.Linear(in_features=4, out_features=4)),
    ('ReLU0', nn.ReLU()),
    ('fc1L:final', nn.Linear(in_features=4, out_features=1))
])
mdl = nn.Sequential(params)

# throws an error:
# mdl['fc0']

for m in mdl.children():
    print(m)
print()

for m in mdl.modules():
    print(m)
print()

for name, m in mdl.named_modules():
    print(name)
    print(m)
print()

for name, m in mdl.named_children():
    print(name)
    print(m)
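Since mdl['fc0'] raises an error here, a small sketch of two ways to fetch a submodule by name instead (using the mdl defined above):
# Option 1: registered submodules are exposed as attributes of the parent
fc0 = getattr(mdl, 'fc0')

# Option 2: build a name -> module lookup from named_children()
fc0 = dict(mdl.named_children())['fc0']
print(fc0)  # Linear(in_features=4, out_features=4, bias=True)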
Upvotes: 3
Reputation: 2039
Assuming m is your module, you can do:
for layer in m.children():
    weights = list(layer.parameters())
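Note that weights is reassigned on every iteration above; a minimal variant that accumulates the parameters of all top-level children instead (assuming m is the model from the question):
all_weights = []
for layer in m.children():
    # extend rather than assign, so parameters from every child are kept
    all_weights.extend(layer.parameters())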
Upvotes: 8
Reputation: 61375
You can simply use model.named_parameters(), which returns a generator you can iterate over to get each parameter's name and tensor.
Here is the code for a pretrained ResNet model:
In [106]: resnet = torchvision.models.resnet101(pretrained=True)

In [107]: for name, param in resnet.named_parameters():
     ...:     print(name, param.shape)
which would output
conv1.weight torch.Size([64, 3, 7, 7])
bn1.weight torch.Size([64])
bn1.bias torch.Size([64])
layer1.0.conv1.weight torch.Size([64, 64, 1, 1])
layer1.0.bn1.weight torch.Size([64])
layer1.0.bn1.bias torch.Size([64])
........
........ and so on
You can find further discussion on this topic in the thread how-to-manipulate-layer-parameters-by-its-names/
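As a quick sketch of that idea, you can build a name-to-tensor lookup from named_parameters() and address a single parameter directly (the names below come from the ResNet output above):
params = dict(resnet.named_parameters())
print(params['conv1.weight'].shape)  # torch.Size([64, 3, 7, 7])

# e.g. freeze that one parameter by name
params['conv1.weight'].requires_grad = False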
Upvotes: 3
Reputation: 37721
Let's say you have the following neural network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # define the forward pass here; left as a stub for this example
        return x
Now, let's print the sizes of the parameters (weights and biases) associated with each layer.
model = Net()
for name, param in model.named_parameters():
    print(name, param.size())
Output:
conv1.weight torch.Size([6, 1, 5, 5])
conv1.bias torch.Size([6])
conv2.weight torch.Size([16, 6, 5, 5])
conv2.bias torch.Size([16])
fc1.weight torch.Size([120, 400])
fc1.bias torch.Size([120])
fc2.weight torch.Size([84, 120])
fc2.bias torch.Size([84])
fc3.weight torch.Size([10, 84])
fc3.bias torch.Size([10])
I hope you can extend the example to fulfill your needs.
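For instance, to build exactly the Weight list the question asks for, one sketch is to filter named_parameters() for entries whose name ends in weight (using the model defined above):
# keep only the weight tensors, skipping the biases
weights = [param for name, param in model.named_parameters()
           if name.endswith('weight')]
print(len(weights))  # 5: conv1, conv2, fc1, fc2, fc3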
Upvotes: 19