Dimension extension with PyTorch tensors

What is the way to do dimension extension with PyTorch tensors?

-before: torch.Size([3, 3, 3])

tensor([[[ 0.,  1.,  2.],
         [ 3.,  4.,  5.],
         [ 6.,  7.,  8.]],

        [[ 9., 10., 11.],
         [12., 13., 14.],
         [15., 16., 17.]],

        [[18., 19., 20.],
         [21., 22., 23.],
         [24., 25., 26.]]], device='cuda:0', dtype=torch.float64)

-after: torch.Size([2, 3, 3, 3])

tensor([[[[ 0.,  1.,  2.],
          [ 3.,  4.,  5.],
          [ 6.,  7.,  8.]],

         [[ 9., 10., 11.],
          [12., 13., 14.],
          [15., 16., 17.]],

         [[18., 19., 20.],
          [21., 22., 23.],
          [24., 25., 26.]]],


        [[[ 0.,  1.,  2.],
          [ 3.,  4.,  5.],
          [ 6.,  7.,  8.]],

         [[ 9., 10., 11.],
          [12., 13., 14.],
          [15., 16., 17.]],

         [[18., 19., 20.],
          [21., 22., 23.],
          [24., 25., 26.]]]], device='cuda:0', dtype=torch.float64)

Under NumPy this would work like this:

b = np.broadcast_to(a1[None, :, :, :], (2, 3, 3, 3))

How does this work in PyTorch? I want to take advantage of the GPU. Thanks in advance for the help!

Upvotes: 1

Views: 1116

Answers (2)

Dishin H Goyani

Reputation: 7693

We can use torch.Tensor.expand to get your expected result:

b = a1.expand([2, 3, 3, 3])
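
Note that expand does not allocate new memory; like np.broadcast_to it returns a view whose new leading dimension has stride 0, so in-place writes would affect every repeated copy. A minimal runnable sketch (my own reconstruction, assuming the question's tensor is rebuilt with torch.arange and that a CUDA device is available):

import torch

# hypothetical reconstruction of the question's 3x3x3 input (values 0..26)
a1 = torch.arange(27, dtype=torch.float64, device='cuda:0').reshape(3, 3, 3)

# expand prepends the new dimension and returns a broadcasted view (no copy)
b = a1.expand(2, 3, 3, 3)
print(b.shape)  # torch.Size([2, 3, 3, 3])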

Upvotes: 1

dannyadam

Reputation: 4170

A new dimension can be added with unsqueeze (0 below specifies the first dimension, i.e., position 0), followed by repeat to tile the data twice along that new dimension (and once, i.e., no repetition, along the other dimensions).

# the question's 3x3x3 tensor (values elided here), stored on the GPU
before = torch.tensor(..., dtype=torch.float64, device='cuda:0')
# add a leading dimension of size 1, then tile the data twice along it
after = before.unsqueeze(0).repeat(2, 1, 1, 1)
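
For completeness (not part of the original answer): unlike expand, repeat materializes the copies in memory. A minimal runnable sketch, again assuming the input is rebuilt with torch.arange and that a CUDA device is available:

import torch

# hypothetical reconstruction of the question's 3x3x3 input (values 0..26)
before = torch.arange(27, dtype=torch.float64, device='cuda:0').reshape(3, 3, 3)

# unsqueeze(0) -> shape [1, 3, 3, 3]; repeat(2, 1, 1, 1) then copies the data
# twice along the new leading dimension, giving shape [2, 3, 3, 3]
after = before.unsqueeze(0).repeat(2, 1, 1, 1)
print(after.shape)  # torch.Size([2, 3, 3, 3])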

Upvotes: 1
