Gericault

Reputation: 229

Stacking copies of an array / a torch tensor efficiently?

I'm a Python/PyTorch user. First, in numpy: let's say I have an array M of size LxL, and I want to obtain the array A = (M, ..., M) of size, say, NxLxL. Is there a more elegant/memory-efficient way of doing it than

A = np.array([M] * N) ?

Same question with a torch tensor! Because now, if M is a Variable(torch.Tensor), I have to do:

A = torch.autograd.Variable(torch.tensor(np.array([M] * N)))

which is ugly!

Upvotes: 9

Views: 20249

Answers (3)

Multihunter

Reputation: 5948

If you don't mind creating new memory:

  • In numpy, you can use np.repeat() or np.tile(). With efficiency in mind, you should choose the one which organises the memory for your purposes, rather than re-arranging after the fact:
    • np.repeat([1, 2], 2) == [1, 1, 2, 2]
    • np.tile([1, 2], 2) == [1, 2, 1, 2]
  • In pytorch, you can use tensor.repeat(). Note: This matches np.tile, not np.repeat.
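
For example, a minimal sketch of the copy-creating route for the NxLxL case from the question (M, N, L named as in the question; small sizes just for illustration):

import numpy as np
import torch

L, N = 4, 3
M = np.random.rand(L, L)

A_np = np.tile(M, (N, 1, 1))   # shape (N, L, L), allocates new memory
T = torch.from_numpy(M)
A_t = T.repeat(N, 1, 1)        # shape (N, L, L), behaves like np.tile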

If you don't want to create new memory:

  • In numpy, you can use np.broadcast_to(). This creates a read-only view of the memory.
  • In pytorch, you can use tensor.expand(). This creates an editable view of the memory, so operations like += will have weird effects.
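
And a minimal sketch of the no-copy route with the same names; the shares_memory/data_ptr checks are only there to show that no copy is made:

import numpy as np
import torch

L, N = 4, 3
M = np.random.rand(L, L)

V_np = np.broadcast_to(M, (N, L, L))     # read-only view, shape (N, L, L)
print(np.shares_memory(M, V_np))         # True: no copy

T = torch.from_numpy(M)
V_t = T.expand(N, L, L)                  # writable view, shape (N, L, L)
print(V_t.data_ptr() == T.data_ptr())    # True: same underlying storage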

Upvotes: 4

mbpaulus

Reputation: 7691

Note that you need to decide whether you would like to allocate new memory for your expanded array, or whether you simply require a new view of the existing memory of the original array.

In PyTorch, this distinction gives rise to the two methods expand() and repeat(). The former only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any dimension of size 1 can be expanded to an arbitrary value without allocating new memory. In contrast, the latter copies the original data and allocates new memory.

In PyTorch, you can use expand() and repeat() as follows for your purposes:

import torch

L = 10
N = 20
A = torch.randn(L, L)
A.expand(N, L, L)   # specifies the new size
A.repeat(N, 1, 1)   # specifies the number of copies
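
One way to see the stride-0 trick described above, with L = 10 and N = 20 from the snippet (illustrative only):

print(A.expand(N, L, L).stride())   # (0, 10, 1): the prepended dim has stride 0, no copy
print(A.repeat(N, 1, 1).stride())   # (100, 10, 1): a fresh contiguous copy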

In Numpy, there are a multitude of ways to achieve what you did above in a more elegant and efficient manner. For your particular purpose, I would recommend np.tile() over np.repeat(), since np.repeat() is designed to operate on the particular elements of an array, while np.tile() is designed to operate on the entire array. Hence,

import numpy as np

L = 10
N = 20
A = np.random.rand(L, L)
np.tile(A, (N, 1, 1))
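
For comparison, np.repeat() can reach the same shape, but only after the new axis is inserted by hand (sketch only):

np.repeat(A[None, :, :], N, axis=0)   # also (N, L, L); repeats along the inserted axis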

Upvotes: 16

hpaulj

Reputation: 231665

In numpy, repeat is faster:

np.repeat(M[None, ...], N, 0)

I expand the dimensions of M, and then repeat along that new dimension.
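
A rough way to check this yourself; an illustrative timing sketch only, the actual numbers depend on your machine and on the sizes involved:

import timeit
import numpy as np

L, N = 100, 50
M = np.random.rand(L, L)

print(timeit.timeit(lambda: np.repeat(M[None, ...], N, 0), number=1000))
print(timeit.timeit(lambda: np.tile(M, (N, 1, 1)), number=1000))
print(timeit.timeit(lambda: np.array([M] * N), number=1000))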

Upvotes: -1
