Reputation: 101
I got this error when I executed my code, and the portion below seems to be throwing it. I tried different approaches but nothing solved it. The error is raised by the loss function.
for i, data in enumerate(train_loader, 0):
    # import pdb;pdb.set_trace()
    inputs, labels = data
    print(type(inputs))
    for input in inputs:
        inputs = torch.Tensor(input)
    inputs, labels = Variable(inputs), Variable(labels)
    inputs = inputs.unsqueeze(1)
    optimizer.zero_grad()
    outputs = net(inputs)
    #import pdb;pdb.set_trace()
    loss_size = loss(outputs, labels)
    loss_size.backward()
    optimizer.step()
    running_loss += loss_size.data[0]
    total_train_loss += loss_size.data[0]
    if (i + 1) % (print_every + 1) == 0:
        print("Epoch {}, {:d}% \t train_loss: {:.2f} took: {:.2f}s".format(
            epoch + 1, int(100 * (i + 1) / n_batches), running_loss / print_every, time.time() - start_time))
        running_loss = 0.0
        start_time = time.time()
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-10-7d1b8710defa> in <module>
      1 CNN = Net()
----> 2 trainNet(CNN, learning_rate=0.001)
      3 #test()

<ipython-input-7-3208c0794681> in trainNet(net, learning_rate)
     23     outputs = net(inputs)
     24     #import pdb;pdb.set_trace()
---> 25     loss_size = loss(outputs, labels)
     26     loss_size.backward()
     27     optimizer.step()

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917
    918

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2022
   2023

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

IndexError: Target 2 is out of bounds.
Upvotes: 10
Views: 23278
Reputation: 5291
This happens in code like this:
for it, batch in enumerate(dataloaders[split]):
    print(f'{it=}')
    X, y = batch
    print(f'{X.size()=}')
    print(f'{y.size()=}')
    print(f'{y=}')
    y_pred = model(X)
    print(f'{y_pred.size()=}')
    # loss = criterion(y_pred, y)
    # print(f'{loss=}')
The cross entropy loss infers the number of allowed classes from the shape of your y_pred, which is [num_data, num_classes] (in a language-model setting, num_classes is the vocabulary size). If the target y contains a class index greater than or equal to num_classes, it throws that error. You need to make sure your data is what you expect and that your model predicts over the number of classes you expect. The vocabulary/class count is the thing to look at.
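As a minimal sketch of that failure mode (the shapes and values here are made up for illustration), nn.CrossEntropyLoss raises exactly this error when a target index is greater than or equal to the size of the class dimension of the predictions:
    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    y_pred = torch.randn(4, 2)           # 4 samples, 2 classes -> valid targets are 0 and 1
    y_good = torch.tensor([0, 1, 1, 0])
    print(criterion(y_pred, y_good))     # computes a loss

    y_bad = torch.tensor([0, 1, 2, 0])   # contains 2, which is >= num_classes
    criterion(y_pred, y_bad)             # IndexError: Target 2 is out of bounds.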
Upvotes: 0
Reputation: 918
The problem is related to the values of your class labels. A good explanation and solution are provided here: IndexError: Target is out of bounds
Upvotes: 0
Reputation: 949
You should change the number of classes to 3.
You probably have 1 and 2 as class labels, so you may have set the number of outputs in your model's Net class to 2, but it should be 3, because that is how PyTorch works: 2 classes means the labels are 0 and 1. Since your labels are 1 and 2, you should treat this as a 3-class (0, 1, 2) classification problem.
Let's say this is your Net class:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer_1 = nn.Linear(100, 10)
        self.layer_2 = nn.Linear(10, 2)

    def forward(self, x):
        x = self.layer_1(x)
        x = F.relu(x)  # nn.relu does not exist; use torch.nn.functional.relu
        x = self.layer_2(x)
        x = F.relu(x)
        return x
So, you just modify layer_2 as follows:
    self.layer_2 = nn.Linear(10, 3)
This should work.
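As a quick sanity check (the input size of 100 matches the Net sketch above; the batch and labels are made up), with three output units the labels 1 and 2 are both in range:
    import torch
    import torch.nn as nn

    net = Net()                      # the Net above, with layer_2 = nn.Linear(10, 3)
    x = torch.randn(4, 100)          # 4 samples, 100 features
    y = torch.tensor([1, 2, 2, 1])   # valid class indices are now 0, 1 and 2
    loss = nn.CrossEntropyLoss()(net(x), y)
    print(loss)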
Upvotes: 5
Reputation: 69
I faced the same problem. It was solved by changing the number of classes:
    num_classes = 10  # changed to the actual number of classes, instead of 1
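If you are unsure how many classes the data actually contains, a small sketch like this (assuming integer labels counted from 0 in a tensor called labels, which is a hypothetical name) derives the value instead of hard-coding it:
    import torch

    labels = torch.tensor([0, 3, 7, 9, 2])       # hypothetical label tensor
    num_classes = int(labels.max().item()) + 1   # assumes labels run 0..C-1
    print(num_classes)                           # 10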
Upvotes: 6