Aleph

Reputation: 217

Efficient PyTorch broadcasting not found

I have the following code snippet in my implementation. It contains three nested for loops. In the main code, the 3D coordinates of the original system are stacked as a 1D vector of contiguously stacked points, so for a point with coordinates (x, y, z) a sample row will look like

Predictions =[...x,y,z,...]

whereas for my calculation I need the reshaped prediction as a 2D matrix per sample, with prediction_reshaped[i][0] = x, prediction_reshaped[i][1] = y, and prediction_reshaped[i][2] = z, where i is any sample row in the matrix prediction_reshaped. The following code shows the logic:

prediction_reshaped = torch.zeros([batch, num_node, dimension])
for i in range(batch):
    for j in range(num_node):
        for k in range(dimension):
            # copy the k-th coordinate of node j in sample i
            prediction_reshaped[i][j][k] = prediction[i][3 * j + k]

Is there any efficient broadcasting to avoid these three nested loops? They are slowing down my code, and torch.reshape does not seem to suit my purpose. The code is implemented in PyTorch with all matrices as PyTorch tensors, but any NumPy solution will also help.

Upvotes: 0

Views: 72

Answers (1)

swag2198

Reputation: 2696

This should do the job. Because the coordinates are stored contiguously, prediction[i][3*j + k] is exactly element [i][j][k] of the reshaped tensor, so a single reshape replaces all three loops.

import torch
batch = 2
num_nodes = 4
x = torch.rand(batch, num_nodes * 3)
# tensor([[0.8076, 0.2572, 0.7100, 0.4180, 0.6420, 0.4668, 0.8915, 0.0366,  0.5704,
#          0.0834, 0.3313, 0.9080],
#         [0.2925, 0.7367, 0.8013, 0.4516, 0.5470, 0.5123, 0.1929, 0.4191,  0.1174,
#          0.0076, 0.2864, 0.9151]])
x = x.reshape(batch, num_nodes, 3)
# tensor([[[0.8076, 0.2572, 0.7100],
#         [0.4180, 0.6420, 0.4668],
#         [0.8915, 0.0366, 0.5704],
#         [0.0834, 0.3313, 0.9080]],
#
#        [[0.2925, 0.7367, 0.8013],
#         [0.4516, 0.5470, 0.5123],
#         [0.1929, 0.4191, 0.1174],
#         [0.0076, 0.2864, 0.9151]]])
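As a quick sanity check, here is a minimal sketch (reusing the batch, num_node, and dimension names from the question, with dimension = 3) showing that the single reshape reproduces the nested-loop result exactly:

import torch

batch, num_node, dimension = 2, 4, 3
prediction = torch.rand(batch, num_node * dimension)

# Reference result: the triple loop from the question
loop_result = torch.zeros(batch, num_node, dimension)
for i in range(batch):
    for j in range(num_node):
        for k in range(dimension):
            loop_result[i][j][k] = prediction[i][3 * j + k]

# Vectorized equivalent: one reshape, no Python-level loops
reshaped = prediction.reshape(batch, num_node, dimension)

assert torch.equal(loop_result, reshaped)

This works because PyTorch tensors are row-major (C-contiguous) by default, so the flat index 3*j + k maps to [j][k] after the reshape. When the input is contiguous, reshape returns a view rather than a copy, so it is cheap regardless of tensor size.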

Upvotes: 1
