margolisd

Reputation: 29

Wrong Number of Init Arguments for Tanh in Pytorch

For a homework assignment, I am implementing a simple neural network in Python using Pytorch. Here is my network class:

class Net(torch.nn.Module):
    def __init__(self, layer_dims, activation="sigmoid"):
        super(Net, self).__init__()
        layers = []
        if activation == 'sigmoid':
            for i in range(1, len(layer_dims) - 1):
                layers.append(nn.Sigmoid(layer_dims[i - 1], layer_dims[i]))
                layers.append(nn.Sigmoid(layer_dims[i - 1]))
            layers.append(nn.Sigmoid(layer_dims[-2], layer_dims[-1]))
            layers.append(nn.Sigmoid())
        elif activation == 'relu':
            for i in range(1, len(layer_dims) - 1):
                layers.append(nn.ReLu(layer_dims[i - 1], layer_dims[i]))
                layers.append(nn.ReLU(layer_dims[i - 1]))
            layers.append(nn.ReLu(layer_dims[-2], layer_dims[-1]))
            layers.append(nn.ReLu())
        elif activation == 'tanh':
            for i in range(1, len(layer_dims) - 1):
                layers.append(nn.Tanh(layer_dims[i - 1], layer_dims[i]))
                layers.append(nn.Tanh(layer_dims[i - 1]))
            layers.append(nn.Tanh(layer_dims[-2], layer_dims[-1]))
            layers.append(nn.Tanh())
        elif activation == 'identity':
            for i in range(1, len(layer_dims) - 1):
                layers.append(nn.Identity(layer_dims[i - 1], layer_dims[i]))
                layers.append(nn.Identity(layer_dims[i - 1]))
            layers.append(nn.Identity(layer_dims[-2], layer_dims[-1]))
            layers.append(nn.Identity())

        self.out = nn.Sequential(*layers)

    def forward(self, input):
        return self.out(input)

def train(data, labels, n, l, activation='sigmoid'):
    if activation not in ['sigmoid','identity','tanh','relu']:
        return
    net = Net([l for i in range(0,n)], activation)
    optim = torch.optim.Adam(net.parameters())
    for i in range(0,5):        
        ypred = net.forward(torch.Tensor(data))
        ypred.backward()
        optim.step()
        optim.zero_grad()
    ypred = net.forward(torch.Tensor(data))
    return (net, torch.nn.CrossEntropyLoss(ypred, labels), net.parameters(), ypred)

When testing this, I have been trying to run the following code segment:

for i in range(3,5):
    for num in [10,30,50]:
        print(train(data.get('X_trn'), data.get('y_trn'), i, num, activation='tanh'))

This errors out with a TypeError saying that __init__() takes 1 positional argument but 3 were given:

<ipython-input-30-376b6c739a71> in __init__(self, layer_dims, activation)
     18         elif activation == 'tanh':
     19             for i in range(1, len(layer_dims) - 1):
---> 20                 layers.append(nn.Tanh(layer_dims[i - 1], layer_dims[i]))
     21                 layers.append(nn.Tanh(layer_dims[i - 1]))
     22             layers.append(nn.Tanh(layer_dims[-2], layer_dims[-1]))

TypeError: __init__() takes 1 positional argument but 3 were given

The same error appears when I switch the activation function. I am unsure what the issue is because, as far as I know, when you create a layer you need to give the input and output dimensions, which is what I have done. Any help sorting this out would be appreciated.

Upvotes: 1

Views: 284

Answers (1)

Zabir Al Nazi Nabil

Reputation: 11198

The error says it plainly: nn.Tanh takes no constructor arguments. It only takes a single argument, a tensor, when the module is called.

From the documentation, https://pytorch.org/docs/stable/nn.html:

Tanh

class torch.nn.Tanh

    Applies the element-wise function:

    \text{Tanh}(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}

    Shape:

        Input: (N, *) where * means any number of additional dimensions

        Output: (N, *), same shape as the input

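A minimal sketch (mine, not from the documentation) of how nn.Tanh is meant to be used: construct it with no arguments, then call it on a tensor.

import torch
import torch.nn as nn

tanh = nn.Tanh()          # constructor takes no arguments
x = torch.randn(4, 3)     # any shape (N, *)
y = tanh(x)               # the tensor is passed when the module is called
print(y.shape)            # torch.Size([4, 3]), same shape as the input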

There are too many mistakes here to fix them all, and you also didn't give a data sample.

Activation functions accept a single tensor when called; you are passing two list elements to their constructors. (If you ever do need to combine two tensors, torch.cat concatenates them.) I would suggest starting with a simpler model and reading the documentation first; a sketch of the usual pattern follows.
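As a sketch of the usual pattern (my illustration, not part of the original answer): the input/output dimensions belong to nn.Linear layers, and the parameter-free activation module is placed between them inside nn.Sequential. Something along these lines is what the question's Net class is trying to build:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, layer_dims, activation="tanh"):
        super(Net, self).__init__()
        # dimensions go to nn.Linear; the activation is constructed with no arguments
        act = {"sigmoid": nn.Sigmoid, "relu": nn.ReLU,
               "tanh": nn.Tanh, "identity": nn.Identity}[activation]
        layers = []
        for i in range(1, len(layer_dims)):
            layers.append(nn.Linear(layer_dims[i - 1], layer_dims[i]))
            layers.append(act())
        self.out = nn.Sequential(*layers)

    def forward(self, x):
        return self.out(x)

net = Net([10, 30, 50, 3], activation="tanh")
print(net(torch.randn(8, 10)).shape)  # torch.Size([8, 3])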

Upvotes: 2
