Larsupial

Reputation: 53

In PyTorch, is there a difference between (x<0) and x.lt(0)?

Suppose x is a tensor in PyTorch. One can either write:

x_lowerthanzero = x.lt(0)

or:

x_lowerthanzero = (x<0)

with seemingly the exact same results. Many other operations have built-in PyTorch equivalents: x.gt(0) for (x>0), x.neg() for -x, x.mul(y) for x*y, and so on.

Is there a good reason to use one form over the other?

Upvotes: 5

Views: 430

Answers (2)

iacob

Reputation: 24321

They are equivalent. < is simply a more readable alias.
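A quick check confirms the equivalence (a minimal sketch; the tensor values are illustrative):

```python
import torch

x = torch.tensor([-1.0, 0.0, 2.0])

a = x.lt(0)         # tensor method
b = x < 0           # comparison operator
c = torch.lt(x, 0)  # functional form

# All three produce the same boolean tensor.
print(a)  # tensor([ True, False, False])
```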

Python operators have canonical function mappings, e.g.:

Algebraic operations

Operation              Syntax   Function
Addition               a + b    add(a, b)
Subtraction            a - b    sub(a, b)
Multiplication         a * b    mul(a, b)
Division               a / b    truediv(a, b)
Exponentiation         a ** b   pow(a, b)
Matrix multiplication  a @ b    matmul(a, b)

Comparisons

Operation   Syntax   Function
Ordering    a < b    lt(a, b)
Ordering    a <= b   le(a, b)
Equality    a == b   eq(a, b)
Difference  a != b   ne(a, b)
Ordering    a >= b   ge(a, b)
Ordering    a > b    gt(a, b)

You can check in the PyTorch source that these are indeed mapped to the correspondingly named torch functions, e.g.:

def __lt__(self, other):
    return self.lt(other)
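You can also observe this dispatch through Python's standard operator module, which exposes the canonical functions directly (the tensor values here are illustrative):

```python
import operator

import torch

x = torch.tensor([3.0, -1.0, 0.0])

# operator.lt(a, b) invokes a.__lt__(b), which PyTorch implements
# by delegating to Tensor.lt, so all three spellings agree.
via_operator = operator.lt(x, 0)
via_method = x.lt(0)
via_symbol = x < 0
```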

Upvotes: 4

flawr

Reputation: 11628

Usually there is no reason to use one over the other; they exist mostly for convenience. Many of those methods do have, for instance, an out argument, which lets you specify a tensor in which to store the result, but you can achieve the same effect using the operators instead of the methods.
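As a sketch of the out argument mentioned above (the tensor values are illustrative), torch.lt can write its result into a preallocated tensor, whereas the operator form always allocates a fresh one:

```python
import torch

x = torch.tensor([-2.0, 1.0, -0.5])
buf = torch.empty(3, dtype=torch.bool)

# Writes the comparison result into buf instead of allocating.
torch.lt(x, 0, out=buf)

# The operator form yields the same values in a new tensor.
assert torch.equal(buf, x < 0)
```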

Upvotes: 1
