Shamoon

Reputation: 43531

Why can't I append a PyTorch tensor with torch.cat?

I have:

import torch

input_sliced = torch.rand(180, 161)
output_sliced = torch.rand(180,)

batched_inputs = torch.Tensor()
batched_outputs = torch.Tensor()

print('input_sliced.size', input_sliced.size())
print('output_sliced.size', output_sliced.size())

batched_inputs = torch.cat((batched_inputs, input_sliced))
batched_outputs = torch.cat((batched_outputs, output_sliced))

print('batched_inputs.size', batched_inputs.size())
print('batched_outputs.size', batched_outputs.size())

This outputs:

input_sliced.size torch.Size([180, 161])
output_sliced.size torch.Size([180])

batched_inputs.size torch.Size([180, 161])
batched_outputs.size torch.Size([180])

I want the batched tensors to grow each time, but the torch.cat call doesn't seem to be appending anything. What am I doing wrong?

Upvotes: 1

Views: 19278

Answers (1)

Berriel

Reputation: 13611

Note that torch.cat is working: concatenating an empty tensor with one sample just returns that sample, so you only see growth if you call it repeatedly. Assuming you're doing this in a loop, it's better to collect the samples in a list and stack them once at the end:

import torch

batch_input, batch_output = [], []
for i in range(10):  # assuming batch_size=10
    batch_input.append(torch.rand(180, 161))
    batch_output.append(torch.rand(180,))

batch_input = torch.stack(batch_input)
batch_output = torch.stack(batch_output)

print(batch_input.shape)   # output: torch.Size([10, 180, 161])
print(batch_output.shape)  # output: torch.Size([10, 180])

If you know the resulting batch_* shapes a priori, you can preallocate the final tensor and assign each sample to its position in the batch. That would be more memory-efficient.
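For reference, the preallocation approach might look like the following sketch (assuming the same batch_size=10 and sample shapes as above):

```python
import torch

batch_size = 10  # assumed, matching the loop example above

# Preallocate the full batch tensors up front (shapes known a priori)
batch_input = torch.empty(batch_size, 180, 161)
batch_output = torch.empty(batch_size, 180)

for i in range(batch_size):
    # Write each sample directly into its slot; no intermediate list
    batch_input[i] = torch.rand(180, 161)
    batch_output[i] = torch.rand(180)

print(batch_input.shape)   # torch.Size([10, 180, 161])
print(batch_output.shape)  # torch.Size([10, 180])
```

This avoids holding both the Python list and the stacked copy in memory at the same time.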

Upvotes: 5
