TomG

Reputation: 155

torch/rnn won't use CUDA

I'm trying to use the torch/rnn toolkit to run RNNs on my NVIDIA graphics card. I've got an Ubuntu 16.04 VM with the NVIDIA driver, CUDA toolkit, Torch, and cuDNN working. I can run the mnistCUDNN example and nvidia-smi shows it using the graphics card. In Torch, I can require('cunn'); and it loads happily.
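
For reference, this is the sort of check I'm doing inside th to confirm CUDA itself is usable (standard cutorch/cunn calls, nothing from the rnn examples):

    require 'cutorch'
    require 'cunn'
    print(cutorch.getDeviceCount())           -- number of GPUs cutorch can see
    local t = torch.CudaTensor(3, 3):fill(1)  -- allocate a tensor on the GPU
    print(t:sum())                            -- should print 9

All of that works fine.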

BUT when I dofile('./rnn/examples/recurrent-visual-attention.lua'); inside Torch, I get

{
   batchsize : 20
   cuda : false
   cutoff : -1
   dataset : "Mnist"
   device : 1
   earlystop : 200
   glimpseDepth : 1
   glimpseHiddenSize : 128
   glimpsePatchSize : 8
   glimpseScale : 2
   hiddenSize : 256
   id : "ptb:brain:1508585440:1"
   imageHiddenSize : 256
   locatorHiddenSize : 128
   locatorStd : 0.11
   lstm : false
   maxepoch : 2000
   maxnormout : -1
   minlr : 1e-05
   momentum : 0.9
   noTest : false
   overwrite : false
   progress : false
   rewardScale : 1
   saturate : 800
   savepath : "/home/tom/save/rmva"
   seqlen : 7
   silent : false
   startlr : 0.01
   stochastic : false
   trainsize : -1
   transfer : "ReLU"
   uniform : 0.1
   unitPixels : 13
   validsize : -1
   version : 13
}

and since cuda is false, it runs on the CPU only.

Any ideas how to work out what I've missed? Thanks.

Upvotes: 1

Views: 184

Answers (1)

TomG

Reputation: 155

I'm an idiot. When I finally worked up the courage to read the source code, I discovered that it doesn't automatically try to use CUDA. There's a -cuda flag to ask it to.

In my defence, the examples are undocumented...
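
For anyone else who ends up here, this is roughly how to pass it (a sketch assuming the script parses the global arg table via torch.CmdLine, which is the usual pattern in the rnn examples; check the script's cmd:option calls for the exact flag spelling, -cuda vs --cuda):

    -- From the shell:
    --   th ./rnn/examples/recurrent-visual-attention.lua --cuda

    -- Or from inside the th REPL, set arg before dofile so cmd:parse(arg) sees the flag:
    arg = {'--cuda'}
    dofile('./rnn/examples/recurrent-visual-attention.lua')

Either way, the printed options should then show cuda : true.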

Upvotes: 1
