Reputation: 357
import torch
from torch import FloatTensor

def new_parameter(*size):
    # Wrap an uninitialized float tensor as a trainable nn.Parameter
    out = torch.nn.Parameter(FloatTensor(*size), requires_grad=True)
    # Fill it in place with Xavier (Glorot) normal initialization
    torch.nn.init.xavier_normal_(out)
    return out
at = new_parameter(1024, 1)
The output is:
Parameter containing:
tensor([[ 0.0203],
[-0.0043],
[-0.0386],
...,
[-0.0084],
[-0.0289],
[-0.0188]], requires_grad=True)
In a similar way, we can create:

bt = torch.randn((1024, 1), requires_grad=True)

The output looks the same:
tensor([[-1.5478],
[ 1.5060],
[ 0.1580],
...,
[ 0.9754],
[ 0.1699],
[ 0.2062]], requires_grad=True)
Are there any differences between the tensors created in the above two ways? Please explain the above code simply.
Upvotes: 1
Views: 1130
Reputation: 40738
The first method initializes a random float tensor and then wraps it with nn.Parameter, which is generally used to register that tensor as a parameter of an nn.Module (not seen here). The utility function nn.init.xavier_normal_ is then applied in place on that parameter to initialize its values.

The second method only initializes a random float tensor. Even though it has requires_grad=True and supports autograd the same way, it is a plain tensor and will not be registered automatically as a module parameter.
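A minimal sketch of that difference, using a hypothetical toy module: when an nn.Parameter is assigned as a module attribute it shows up in parameters() (and thus in any optimizer built from them), while a plain tensor with requires_grad=True does not:

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered: assigning an nn.Parameter to a module attribute
        # adds it to the module's parameter list automatically.
        self.at = nn.Parameter(torch.empty(1024, 1))
        nn.init.xavier_normal_(self.at)
        # Not registered: a plain tensor, even with requires_grad=True,
        # is invisible to parameters() and named_parameters().
        self.bt = torch.randn((1024, 1), requires_grad=True)

m = Toy()
names = [n for n, _ in m.named_parameters()]
print(names)  # ['at'] — 'bt' is missing
```

So if you later call optimizer = torch.optim.SGD(m.parameters(), lr=0.1), only at would be updated; bt would still accumulate gradients but never be stepped by the optimizer.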
Upvotes: 1