aysebilgegunduz

Reputation: 880

RuntimeError: expected scalar type Double but found Float

I'm a newbie in PyTorch and I got the following error from my CNN layer: "RuntimeError: expected scalar type Double but found Float". I converted each element with .astype(np.double), but the error message remains. Then, after creating the tensor, I tried calling .double() on it, and the error message still remains. Here is my code for a better understanding:

import numpy as np
import torch
import torch.nn as nn

class CNN(nn.Module):

    # Constructor
    def __init__(self, shape):
        super(CNN, self).__init__()
        self.cnn1 = nn.Conv1d(in_channels=shape, out_channels=32, kernel_size=3)
        self.act1 = nn.ReLU()

    # Prediction
    def forward(self, x):
        x = self.cnn1(x)
        x = self.act1(x)
        return x

# Reshape the flat training data into (samples, length, depth)
X_train_reshaped = np.zeros([X_train.shape[0], int(X_train.shape[1] / depth), depth])

for i in range(X_train.shape[0]):
    for j in range(X_train.shape[1]):
        X_train_reshaped[i][int(j / 3)][j % 3] = X_train[i][j].astype(np.double)

X_train = torch.tensor(X_train_reshaped)
y_train = torch.tensor(y_train)

# Dataset w/o any transformations
train_dataset_normal = CustomTensorDataset(tensors=(X_train, y_train), transform=None)
train_loader = torch.utils.data.DataLoader(train_dataset_normal, shuffle=True, batch_size=16)

model = CNN(X_train.shape[1]).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

# Train the model
#how to implement batch_size??
for epoch in range(epochno):
    #for i, (dataX, labels) in enumerate(X_train_reshaped,y_train):
    for i, (dataX, labels) in enumerate(train_loader):
        dataX = dataX.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(dataX)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))

And the following is the error I received:

RuntimeError                              Traceback (most recent call last)
<ipython-input-39-d99b62b3a231> in <module>
     14 
     15         # Forward pass
---> 16         outputs = model(dataX.double())
     17         loss = criterion(outputs, labels)
     18 

~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input-27-7510ac2f1f42> in forward(self, x)
     22     # Prediction
     23     def forward(self, x):
---> 24         x = self.cnn1(x)
     25         x = self.act1(x)

~\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

~\torch\nn\modules\conv.py in forward(self, input)
    261 
    262     def forward(self, input: Tensor) -> Tensor:
--> 263         return self._conv_forward(input, self.weight, self.bias)
    264 
    265 

~\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
    257                             weight, bias, self.stride,
    258                             _single(0), self.dilation, self.groups)
--> 259         return F.conv1d(input, weight, bias, self.stride,
    260                         self.padding, self.dilation, self.groups)
    261 

RuntimeError: expected scalar type Double but found Float

Upvotes: 7

Views: 11375

Answers (3)

user22290209

Reputation: 1

I encountered the same error and don't have the reputation to comment.

More info on why data.float() is the correct solution can be found in Kuvalekar's answer to "RuntimeError: expected scalar type Double but found Float" in Pytorch CNN training. He stated that "that error is actually referring to the weights of the conv layer which are in float32 by default when the matrix multiplication is called".
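
A minimal sketch of that mismatch (the layer shapes here are made up for illustration, not taken from the question):

import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3)
print(conv.weight.dtype)       # torch.float32 -- conv weights default to float32

x_double = torch.randn(16, 3, 100, dtype=torch.float64)
# conv(x_double) raises: RuntimeError: expected scalar type Double but found Float
out = conv(x_double.float())   # casting the input to float32 resolves the mismatch
print(out.dtype)               # torch.float32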

Since I hit my version of the error when storing the model, converting the model itself with model.double() may also be a possible, though probably worse, solution.
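
Continuing the sketch above, the model-side alternative would look like:

model = nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3).double()
out = model(x_double)          # weights are now float64, so the double input is accepted
print(out.dtype)               # torch.float64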

Upvotes: 0

aysebilgegunduz

Reputation: 880

I don't know if it's me or PyTorch, but the error message is effectively telling you to convert the input into float. I resolved the problem by converting dataX to float in the forward pass: outputs = model(dataX.float())
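
A minimal, self-contained sketch of that fix (dummy data and shapes stand in for the question's X_train/y_train):

import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.tensor(np.random.rand(100, 3, 50))    # float64, like a tensor created from a NumPy double array
y = torch.randint(0, 2, (100,))
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3)

for dataX, labels in loader:
    outputs = model(dataX.float())              # cast the batch to float32 to match the conv weights
    break

print(outputs.dtype)                            # torch.float32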

Upvotes: 6

Y.Z.

Reputation: 31

I agree with aysebilgegunduz. It seems to be a PyTorch quirk, as I also encountered the same error message.

Simply changing the tensor to the other type solves the problem.

You can check the type of the input tensor with:

data.type()

Some helpful functions to change the type:

data.float()
data.double()
data.long()
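
For example, a quick interactive check (made-up tensor, not from the question):

>>> import torch
>>> x = torch.zeros(4, 3, dtype=torch.float64)
>>> x.type()
'torch.DoubleTensor'
>>> x.float().type()
'torch.FloatTensor'
>>> x.long().type()
'torch.LongTensor'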

Upvotes: 3
