s-m-e

Reputation: 3711

How to do an "element-by-element in-place inverse" with PyTorch?

Given an array a:

import numpy as np

a = np.arange(1, 11, dtype='float32')

With numpy, I can do the following:

np.divide(1.0, a, out=a)

Resulting in:

array([1.        , 0.5       , 0.33333334, 0.25      , 0.2       ,
       0.16666667, 0.14285715, 0.125     , 0.11111111, 0.1       ],
      dtype=float32)

Assuming that a is instead a PyTorch tensor, the equivalent operation fails:

torch.div(1.0, a, out=a)

The first parameter of div is expected to be a tensor of matching length/shape.

If I substitute 1.0 with a tensor b filled with ones, with the same shape as a, it works. The downside is that I have to allocate memory for b. Writing a = 1.0 / a also works, but that again allocates extra (temporary) memory.
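
For reference, a minimal sketch of both workarounds described above (each allocates an extra tensor; the variable names are illustrative):

import torch

a = torch.arange(1, 11, dtype=torch.float32)

# Workaround 1: divide a ones tensor by a, writing the result into a
# (allocates b)
b = torch.ones_like(a)
torch.div(b, a, out=a)

# Workaround 2: plain division (allocates a temporary result tensor)
a = torch.arange(1, 11, dtype=torch.float32)
a = 1.0 / a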

How can I do this operation efficiently "in-place" (without the allocation of extra memory), ideally with broadcasting?

Upvotes: 7

Views: 4805

Answers (1)

srmsoumya

Reputation: 366

PyTorch follows the convention of using a trailing underscore (_) for in-place operations, e.g.:

add -> add_  # in-place equivalent
div -> div_  # in-place equivalent
etc.
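
For illustration, a small sketch of that convention (the plain method returns a new tensor, while the underscore variant mutates its receiver):

import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = a.add(1)   # out-of-place: a is unchanged, b is a new tensor
a.add_(1)      # in-place: a itself is now tensor([2., 3., 4.])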

Element-by-element in-place inverse:

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.pow_(-1)  # raise each element to the power -1, in place; returns a, so the REPL prints it
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.div_(a ** 2)  # a / a**2 == 1/a; note that a ** 2 allocates a temporary tensor
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])
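
As a side note, PyTorch also provides a dedicated in-place reciprocal, Tensor.reciprocal_(), which expresses the same operation directly and avoids the temporary from a ** 2:

>>> a = torch.arange(1, 11, dtype=torch.float32)
>>> a.reciprocal_()  # in-place 1/a
tensor([1.0000, 0.5000, 0.3333, 0.2500, 0.2000, 0.1667, 0.1429, 0.1250, 0.1111, 0.1000])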

Upvotes: 12
