Marine Galantin

Reputation: 2279

Debugging a neural network dropout problem: probability not lying inside [0, 1]

I tried to add a dropout rate to my neural network (NN) using torch, and I got a strange error at the end. How can I fix it?

So the idea is that I wrote the NN inside a function to make it easier to call. I personally think the problem lies inside the class of the NN, but for the sake of a working example I'm including everything. The function is the following:

import torch
import torch.nn as nn
import torch.nn.functional as F

def train_neural_network(data_train_X, data_train_Y, batch_size, learning_rate, graph=True, dropout=0.0):
  input_size = len(data_train_X.columns)
  hidden_size = 200
  num_classes = 4
  num_epochs = 120
  batch_size = batch_size
  learning_rate = learning_rate

  # The class of NN
  class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p = dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x, p = dropout):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = nn.Dropout(out, p) # dropout applied here -- this is the line that raises the error
        out = self.fc3(out)
        return out

  # Prepare data
  X_train = torch.from_numpy(data_train_X.values).float()
  Y_train = torch.from_numpy(data_train_Y.values).float()

  # Loading data
  train = torch.utils.data.TensorDataset(X_train, Y_train)
  train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size)

  net = NeuralNet(input_size, hidden_size, num_classes)

  # Loss
  criterion = nn.CrossEntropyLoss()

  # Optimiser
  optimiser = torch.optim.SGD(net.parameters(), lr=learning_rate)

  # Proper training
  total_step = len(train_loader)
  loss_values = []

  for epoch in range(num_epochs+1):
    net.train()

    train_loss = 0.0

    for i, (predictors, results) in enumerate(train_loader, 0):
      # Forward pass
      outputs = net(predictors)
      results = results.long()
      results = results.squeeze_()
      loss = criterion(outputs, results)

      # Backward and optimise
      optimiser.zero_grad()
      loss.backward()
      optimiser.step()

      # Update loss
      train_loss += loss.item()

    loss_values.append(train_loss / total_step)  # average loss per batch
  print('Finished Training')

  return net

And when I call the function:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)
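data_train_X and data_train_Y are pandas DataFrames (the code above uses .columns and .values). A dummy stand-in with arbitrary shapes, just to make the example self-contained:

import numpy as np
import pandas as pd

data_train_X = pd.DataFrame(np.random.randn(100, 10))                # 10 predictor columns
data_train_Y = pd.DataFrame(np.random.randint(0, 4, size=(100, 1)))  # labels in {0, 1, 2, 3}
batch_size = 16
learning_rate = 0.01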

The error is the following:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/dropout.py in __init__(self, p, inplace)
      8     def __init__(self, p=0.5, inplace=False):
      9         super(_DropoutNd, self).__init__()
---> 10         if p < 0 or p > 1:
     11             raise ValueError("dropout probability has to be between 0 and 1, "
     12                              "but got {}".format(p))

RuntimeError: bool value of Tensor with more than one value is ambiguous

Why do you think this error is happening?

Before adding the dropout rate, everything worked. Bonus points if you know how to add a bias inside my network, for example on the hidden layer; I can't find any example online.

Upvotes: 1

Views: 537

Answers (1)

Nicolas Gervais

Reputation: 36624

nn.Dropout is a module constructor, not a function you apply to a tensor. In nn.Dropout(out, p), the activation tensor out lands in the p argument, so the p < 0 check inside __init__ is evaluated on a tensor with many elements, which is exactly the RuntimeError you got.
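You can reproduce the failure in isolation; a minimal sketch, assuming only torch is installed:

import torch
import torch.nn as nn

out = torch.randn(5, 200)   # stand-in for the fc2 activations
bad = nn.Dropout(out, 0.1)  # tensor ends up as p -> "bool value of Tensor ... is ambiguous"

The fix is to build the dropout module once in __init__ and call it in forward. Change your architecture to this: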

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p=p)  # construct the module once

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = self.dropout(out)  # dropout before the output layer, as in your original forward
        out = self.fc3(out)
        return out
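If you prefer the functional style you were using, F.dropout also works, but then you must pass the module's training flag yourself so dropout is disabled in eval mode. A sketch, assuming you store self.p = p in __init__:

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        out = F.dropout(out, p=self.p, training=self.training)  # self.p assumed set in __init__
        return self.fc3(out)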

Let me know if it works.
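As for the bias: nn.Linear already includes a learnable bias on every layer by default (bias=True), so your hidden layer has one. You only need the argument if you want to turn it off:

import torch.nn as nn

hidden = nn.Linear(200, 200)               # bias=True by default
print(hidden.bias.shape)                   # torch.Size([200])
no_bias = nn.Linear(200, 200, bias=False)
print(no_bias.bias)                        # None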

Upvotes: 2
