Aniket Vishwakarma

Reputation: 87

'Net' object has no attribute 'parameters'

I am fairly new to machine learning. I learned to write this code from YouTube tutorials, but I keep getting this error:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/aniket/Desktop/DeepLearning/PythonLearningPyCharm/CatVsDogs.py", line 109, in <module>
    optimizer = optim.Adam(net.parameters(), lr=0.001) # tweaks the weights from what I understand
AttributeError: 'Net' object has no attribute 'parameters'

this is the Net class

class Net():
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1,32,5)
        self.conv2 = nn.Conv2d(32,64,5)
        self.conv3 = nn.Conv2d(64,128,5)
        self.to_linear = None
        x = torch.randn(50,50).view(-1,1,50,50)
        self.Conv2d_Linear_Link(x)
        self.fc1 = nn.Linear(self.to_linear, 512)
        self.fc2 = nn.Linear(512, 2)

    def Conv2d_Linear_Link(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)),(2,2))
        x = F.max_pool2d(F.relu(self.conv2(x)),(2,2))
        x = F.max_pool2d(F.relu(self.conv3(x)),(2,2))

        if self.to_linear is None :
            self.to_linear = x[0].shape[0]*x[0].shape[1]*x[0].shape[2]
        return x

    def forward(self, x):
        x = self.Conv2d_Linear_Link(x)
        x = x.view(-1, self.to_linear) # flatten the conv output before the fully connected layer
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.softmax(x, dim=1)

and this is the train function

def train():
    for epoch in range(epochs):
        for i in tqdm(range(0, len(train_X), batch)):
            batch_x = train_X[i:i + batch].view(-1, 1, 50, 50)
            batch_y = train_y[i:i + batch]
            net.zero_grad() # resets the gradients so they don't accumulate across batches (from what I understand)
            output = net(batch_x)
            loss = loss_function(output, batch_y)
            loss.backward()
            optimizer.step()
        print(loss)

and these are the optimizer and loss function

optimizer = optim.Adam(net.parameters(), lr=0.001) # tweaks the weights from what I understand
loss_function = nn.MSELoss() # gives the loss

Upvotes: 4

Views: 12741

Answers (3)

Taeef Najib

Reputation: 77

You need to import optim from torch

from torch import optim

Upvotes: -3

Karl

Reputation: 5303

You're not subclassing nn.Module. It should look like this:

class Net(nn.Module):
    def __init__(self):
        super().__init__()

This allows your network to inherit the functionality of the nn.Module class, including the parameters() method that the optimizer call needs.
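
For a quick check (a minimal sketch, using only pieces already shown in the question), subclassing nn.Module makes parameters() available, so the optimizer line from the traceback works:

import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super().__init__()                # registers the module with nn.Module's bookkeeping
        self.conv1 = nn.Conv2d(1, 32, 5)  # layers assigned as attributes are now tracked

net = Net()
print(sum(p.numel() for p in net.parameters()))     # 832 parameters from conv1 (weights + biases)
optimizer = optim.Adam(net.parameters(), lr=0.001)  # the line that previously raised the error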

Upvotes: 10

mahdi nezhadasad

Reputation: 86

You may also have a spelling mistake; check which attributes and methods your Net class actually defines.
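
One way to check (a minimal sketch, assuming the Net class and imports exactly as posted in the question):

net = Net()
print([name for name in dir(net) if not name.startswith('_')])  # everything the instance actually exposes
print(hasattr(net, 'parameters'))  # False for the posted class, because it does not subclass nn.Module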

Upvotes: 1
