Reputation: 3
I am experimenting with the PyTorch library to train a CNN. There is nothing wrong with the model itself (I can feed data forward through it with no error), and I prepare a custom dataset with the DataLoader function.
This is my code for data prep (I've omitted some irrelevant variable declaration, etc.):
# Initialize model
class neural_net_model(nn.Module):
# omitted
...
# Prep the dataset
train_data = torchvision.datasets.ImageFolder(root = TRAIN_DATA_PATH, transform = TRANSFORM_IMG)
train_data_loader = data_utils.DataLoader(train_data, batch_size = BATCH_SIZE, shuffle = True)
test_data = torchvision.datasets.ImageFolder(root = TEST_DATA_PATH, transform = TRANSFORM_IMG)
test_data_loader = data_utils.DataLoader(test_data, batch_size = BATCH_SIZE, shuffle = True)
But in the training code (which I based on various online references), there is an error when I do the forward pass with this instruction:
...
for step, (data, label) in enumerate(train_data_loader):
outputs = neural_net_model(data)
...
This raises the following error:
NotImplementedError Traceback (most recent call last)
<ipython-input-12-690cfa6916ec> in <module>
6
7 # Forward pass
----> 8 outputs = neural_net_model(images)
9 loss = criterion(outputs, labels)
10
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
487 result = self._slow_forward(*input, **kwargs)
488 else:
--> 489 result = self.forward(*input, **kwargs)
490 for hook in self._forward_hooks.values():
491 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in forward(self, *input)
83 registered hooks while the latter silently ignores them.
84 """
---> 85 raise NotImplementedError
86
87 def register_buffer(self, name, tensor):
NotImplementedError:
I can't find a similar problem anywhere on the internet, which seems strange because I've followed the code almost exactly as in the references, and the error is not really explained in the docs (just NotImplementedError).
Do you guys know the cause and solution to this problem?
from torch import nn, from_numpy
import torch
import torch.nn.functional as F
class DeXpression(nn.Module):
def __init__(self, ):
super(DeXpression, self).__init__()
# Layer 1
self.convolution1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size = 7, stride = 2, padding = 3)
self.pooling1 = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Layer FeatEx1
self.convolution2a = nn.Conv2d(in_channels = 64, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
self.convolution2b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)
self.pooling2a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
self.convolution2c = nn.Conv2d(in_channels = 64, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)
self.pooling2b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Layer FeatEx2
self.convolution3a = nn.Conv2d(in_channels = 272, out_channels = 96, kernel_size = 1, stride = 1, padding = 0)
self.convolution3b = nn.Conv2d(in_channels = 96, out_channels = 208, kernel_size = 3, stride = 1, padding = 1)
self.pooling3a = nn.MaxPool2d(kernel_size = 3, stride = 1, padding = 1)
self.convolution3c = nn.Conv2d(in_channels = 272, out_channels = 64, kernel_size = 1, stride = 1, padding = 0)
self.pooling3b = nn.MaxPool2d(kernel_size = 3, stride = 2, padding = 0)
# Fully-connected Layer
self.fc1 = nn.Linear(45968, 1024)
self.fc2 = nn.Linear(1024, 64)
self.fc3 = nn.Linear(64, 8)
def net_forward(self, x):
# Layer 1
x = F.relu(self.convolution1(x))
x = F.local_response_norm(self.pooling1(x), size = 2)
y1 = x
y2 = x
# Layer FeatEx1
y1 = F.relu(self.convolution2a(y1))
y1 = F.relu(self.convolution2b(y1))
y2 = self.pooling2a(y2)
y2 = F.relu(self.convolution2c(y2))
x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
x[:, 0:y1.shape[1], :, :] = y1
x[:, y1.shape[1]:, :, :] = y2
x = self.pooling2b(x)
y1 = x
y2 = x
# Layer FeatEx2
y1 = F.relu(self.convolution3a(y1))
y1 = F.relu(self.convolution3b(y1))
y2 = self.pooling3a(y2)
y2 = F.relu(self.convolution3c(y2))
x = torch.zeros([y1.shape[0], y1.shape[1] + y2.shape[1], y1.shape[2], y1.shape[3]])
x[:, 0:y1.shape[1], :, :] = y1
x[:, y1.shape[1]:, :, :] = y2
x = self.pooling3b(x)
# Fully-connected layer
x = x.view(-1, x.shape[0] * x.shape[1] * x.shape[2] * x.shape[3])
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.log_softmax(self.fc3(x), dim = None)
return x
Upvotes: 0
Views: 1378
Reputation: 114786
Your network class implements a net_forward method. However, nn.Module expects its derived classes to implement a forward method (without the net_ prefix).
Simply rename net_forward to forward and your code should be okay.
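For reference, here is a minimal sketch (a hypothetical TinyNet, not your DeXpression model) of the pattern nn.Module expects: the subclass overrides forward, and calling the module instance dispatches to it through nn.Module.__call__:
import torch
from torch import nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, 10)

    def forward(self, x):            # must be named exactly "forward"
        x = F.relu(self.conv(x))
        x = x.view(x.shape[0], -1)   # flatten each sample, keep the batch dimension
        return F.log_softmax(self.fc(x), dim=1)

model = TinyNet()
dummy_batch = torch.zeros(4, 1, 28, 28)   # hypothetical batch of 4 grayscale 28x28 images
outputs = model(dummy_batch)              # __call__ dispatches to forward()
print(outputs.shape)                      # torch.Size([4, 10])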
You can learn more about inheritance and method overriding here.
Old Answer:
The code you are running and the code you posted are not the same.
You posted this code:
for step, (data, label) in enumerate(train_data_loader):
    outputs = neural_net_model(data)
while the code you actually run (as it appears in the posted error message) is:
# Forward pass
outputs = model(images)
The error you get indicates that the model to which you feed images is of class nn.Module and not an actual implementation derived from nn.Module. Therefore, the model you are trying to use has no explicit implementation of a forward method. Make sure you are using the actual model you implemented.
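To illustrate that failure mode, here is a small hypothetical snippet: a subclass that never overrides forward (the method is misnamed my_forward) raises exactly the NotImplementedError from your traceback when the instance is called:
import torch
from torch import nn

class NoForward(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def my_forward(self, x):    # wrong name: nn.Module.__call__ never looks for this
        return self.fc(x)

model = NoForward()
x = torch.zeros(1, 4)

try:
    model(x)                    # dispatches to nn.Module.forward, which is not implemented
except NotImplementedError:
    print("forward() was never overridden")

print(model.my_forward(x).shape)   # calling the misnamed method directly works: torch.Size([1, 2])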
Upvotes: 1