fstab

Reputation: 5029

exception using the `apply` method on a `CudaTensor`

This problem emerged only after I tried to use torch's CUDA capabilities instead of relying on the CPU.

I'm trying to initialize the weights of a convolutional neural network, which are stored in a CudaTensor. The function is the following:

function fill_0normal(t,sigma)
  t:apply(function() return torch.normal(0,sigma) end)
end

and it's invoked in the following way:

fill_0normal(m.weight, sigma)

with m being a convolutional module, m.weight being a CudaTensor, and sigma being a floating-point value.
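
For reference, here is a minimal sketch of my setup (the module type, dimensions, and sigma value below are placeholders, not my actual values):

require 'nn'
require 'cunn'  -- also loads cutorch and the CUDA-backed modules

local sigma = 0.01                            -- placeholder value
local m = nn.SpatialConvolution(3, 16, 5, 5)  -- placeholder conv layer
m:cuda()                                      -- m.weight becomes a torch.CudaTensor

fill_0normal(m.weight, sigma)                 -- this call raises the exception below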

The exception that I get is the following:

/hpc/sw/torch7-2016.02.09/bin/luajit: invalid arguments: number number 
expected arguments: *CudaTensor* [float] [float]
stack traceback:
    [C]: at 0x2aaaaf63e040
    [C]: in function 'func'
    /hpc/sw/torch7-2016.02.09/share/lua/5.1/torch/FFI.lua:117: in function 'apply'
    /hpc/sw/torch7-2016.02.09/share/lua/5.1/cutorch/Tensor.lua:3: in function 'apply'
    setup_model.lua:4: in function 'fill_0normal'
    setup_model.lua:16: in function 'init_conv'
    setup_model.lua:43: in function 'init_module'
    setup_model.lua:90: in function 'initializeNetRandomly'
    assignment3-cifar10.lua:49: in main chunk
    [C]: in function 'dofile'
    .../torch7-2016.02.09/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00406010

Any ideas on what might cause it?

I also tried converting the value returned by torch.normal into a 1-element CudaTensor, but that didn't help.

Upvotes: 0

Views: 238

Answers (1)

smhx

Reputation: 2266

You can directly call :normal on the tensor.

function fill_0normal(t, sigma)
  -- fills t in place with samples drawn from N(0, sigma)
  t:normal(0, sigma)
end
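
For example (the sigma value here is arbitrary):

fill_0normal(m.weight, 0.01)  -- fills the weights in place with samples from N(0, 0.01)

This fills the CudaTensor in place on the GPU, so no value has to be produced one element at a time by a Lua callback.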

Upvotes: 2
