Farshid Rayhan

Reputation: 1214

PyTorch can't convert np.ndarray of type numpy.object_

I am trying to create a PyTorch data loader for images of varying sizes. Here is a snippet of my code:

import cv2
import numpy as np

def get_imgs(path_to_imgs):
    imgs = []
    for path in path_to_imgs:
        imgs.append(cv2.imread(path))   # each image is an H x W x C uint8 array

    imgs = np.asarray(imgs)             # images differ in size, so this becomes dtype=object
    return imgs

The function above takes a list of paths and loads the images into the list imgs. The images are not all the same size. The list looks like imgs = [NumPy array, NumPy array, ...]. However, when I pass the list to np.asarray, the resulting array has dtype = object.
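For reference, here is a minimal reproduction (the shapes are made up; newer NumPy versions only build such a ragged array when dtype=object is passed explicitly, while older versions do it silently, which is what is happening here):

import numpy as np
import torch

a = np.zeros((100, 120, 3), dtype=np.uint8)   # dummy image, made-up shape
b = np.zeros((80, 60, 3), dtype=np.uint8)     # another dummy image, different shape

batch = np.asarray([a, b], dtype=object)      # ragged list -> dtype=object array
print(batch.dtype)                            # object
torch.Tensor(batch)                           # raises the TypeError quoted below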

This is my Dataset class:

class Dataset(torch.utils.data.Dataset):

  def __init__(self, path_to_imgs, path_to_label):
        'Initialization'
        self.path_to_imgs = path_to_imgs
        self.path_to_label = path_to_label

        self.imgs = get_imgs(path_to_imgs)
        self.label = get_pts(path_to_label)

        self.imgs = torch.Tensor(self.imgs)        # <-- error occurs here
        # self.imgs = torch.from_numpy(self.imgs)  # I tried this as well; same error

        self.label = torch.Tensor(self.label)

        self.len = len(self.imgs)

  def __len__(self):
        'Denotes the total number of samples'
        return self.len

  def __getitem__(self, index):

        return self.imgs, self.label

When I try to convert the list of images to a tensor, it fails with the following error:

can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.

I have looked at similar questions here and here, but they were not helpful.

Upvotes: 2

Views: 7305

Answers (1)

Alexey Golyshev

Reputation: 812

import cv2
import torch

def get_imgs(path_to_imgs):
    imgs = []
    for path in path_to_imgs:
        # convert each image to a tensor immediately and keep them in a plain
        # Python list, so no object-dtype NumPy array is ever created
        imgs.append(torch.Tensor(cv2.imread(path)))
    return imgs

class Dataset(torch.utils.data.Dataset):
    def __init__(self, path_to_imgs, path_to_label):
        'Initialization'
        self.path_to_imgs = path_to_imgs
        self.path_to_label = path_to_label

        self.imgs = get_imgs(path_to_imgs)
        self.label = get_pts(path_to_label)

        # padding ops go here (https://pytorch.org/docs/stable/nn.html#padding-layers);
        # a sketch is given after this code block
        # for img in self.imgs:
        #     ...

        self.label = torch.Tensor(self.label)

        self.len = len(self.imgs)

    def __len__(self):
        'Denotes the total number of samples'
        return self.len

    def __getitem__(self, index):
        # return a single (image, label) pair instead of the whole dataset
        return self.imgs[index], self.label[index]
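One way to fill in the padding step hinted at in the comments above (a minimal sketch, not part of the original answer: it assumes each image is an H x W x C tensor and uses torch.nn.functional.pad to pad every image to the largest height and width in the set so they can be stacked; pad_to_common_size is a hypothetical helper name):

import torch
import torch.nn.functional as F

def pad_to_common_size(imgs):
    # imgs: list of H x W x C float tensors with varying H and W
    max_h = max(img.shape[0] for img in imgs)
    max_w = max(img.shape[1] for img in imgs)
    padded = []
    for img in imgs:
        pad_h = max_h - img.shape[0]
        pad_w = max_w - img.shape[1]
        # F.pad pads the last dimension first:
        # (channels left/right, width left/right, height top/bottom)
        padded.append(F.pad(img, (0, 0, 0, pad_w, 0, pad_h)))
    return torch.stack(padded)   # shape: (N, max_h, max_w, C)

With this, self.imgs = pad_to_common_size(self.imgs) yields a single stacked tensor, and __getitem__ can return self.imgs[index] as above.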

Upvotes: 1
