dwqy11

Reputation: 165

A strange behavior of a PyTorch tensor's type. Can anyone explain it clearly?

When I create a PyTorch tensor and try to inspect its type, I see this:

>>> a = torch.rand(3,5)
>>> a
tensor([[0.3356, 0.0968, 0.2185, 0.9843, 0.7846],
        [0.8523, 0.3300, 0.7181, 0.2692, 0.6523],
        [0.0523, 0.9344, 0.3505, 0.8901, 0.6464]])
>>> type(a)
<class 'torch.Tensor'>
>>> a = a.cuda()
>>> a.is_cuda
True
>>> type(a)
<class 'torch.Tensor'>
>>> a.dtype
torch.float32
>>> 

Why is type(a) still torch.Tensor rather than torch.cuda.Tensor, even though this tensor is already on GPU?

Upvotes: 3

Views: 195

Answers (1)

Szymon Maszke

Reputation: 24681

You got me digging there, but apparently the built-in type() no longer works for tensor type detection since 0.4.0 (see this comment and this point in the migration guide).

To get the proper type, torch.Tensor exposes a type() method, which can be used directly:

import torch

a = torch.rand(3, 5)
print(a)
print(a.type())
a = a.to("cuda")
print(a.is_cuda)
print(a.type())

which yields, as expected:

tensor([[0.5060, 0.6998, 0.5054, 0.4822, 0.4408],
        [0.7686, 0.4508, 0.4968, 0.4460, 0.7352],
        [0.1810, 0.6953, 0.7499, 0.7105, 0.1776]])
torch.FloatTensor
True
torch.cuda.FloatTensor

However, I am unsure about the rationale behind this decision and could not find anything relevant other than that.
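As a side note, since the 0.4.0 merge of torch.Tensor and Variable, the device is an attribute of the tensor rather than part of its Python class, so the idiomatic checks nowadays are tensor attributes instead of type(). A minimal sketch (the CUDA branch is guarded, since it assumes a GPU is available):

```python
import torch

a = torch.rand(3, 5)

# isinstance works uniformly: CPU and CUDA tensors share one class
print(isinstance(a, torch.Tensor))  # True
print(a.device)                     # device(type='cpu')
print(a.dtype)                      # torch.float32

if torch.cuda.is_available():
    a = a.to("cuda")
    print(a.device)   # device(type='cuda', index=0)
    print(a.is_cuda)  # True
```

This is likely why type() stopped distinguishing devices: with a single torch.Tensor class, device and dtype are queried via a.device and a.dtype, while a.type() remains for the old string-style names like torch.cuda.FloatTensor.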

Upvotes: 1
