pavel

Reputation: 398

Torch linear model forward pass 4 times slower on GPU than on CPU

I am working on one of the AWS GPU instances using Torch 7. The following code benchmarks a simple forward pass through a linear model. The GPU execution seems to be about 4 times slower than on the CPU. What am I doing wrong?

require 'torch'
require 'nn'

cmd = torch.CmdLine()
cmd:option("-gpu", 0)       -- 0 = CPU, >0 = GPU
cmd:option("-n_in", 100)    -- input size of the linear layer
cmd:option("-n_out", 100)   -- output size of the linear layer
cmd:option("-n_iter", 1000) -- number of forward passes to time

params = cmd:parse(arg)
A = torch.randn(params.n_in)
model = nn.Sequential():add(nn.Linear(params.n_in, params.n_out))

if params.gpu > 0 then
    require 'cutorch'
    require 'cudnn'
    A = A:cuda()
    model = model:cuda()
end

timer = torch.Timer()

for i = 1, params.n_iter do
    A2 = model:forward(A)
end
print("Average time: " .. timer:time().real / params.n_iter)
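For reference, I run the script roughly like this (the file name bench.lua is just a placeholder):

th bench.lua -gpu 0    -- CPU run with the default 100 x 100 layer
th bench.lua -gpu 1    -- same model and input on the GPU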

Upvotes: 1

Views: 497

Answers (1)

kangshiyin

Reputation: 9779

You need a sufficiently large network to fully utilize the GPU. For a small network (< 500 x 500), the overhead of GPU kernel launches, data transfer over PCI-E, etc. takes up a large portion of the total running time. In that case you may want to use the CPU instead.
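As a rough illustration (a minimal sketch assuming cutorch/cunn are installed; the 4096 size and 100 iterations are arbitrary choices, not taken from your script), you can time the same forward pass at a small and a large layer size, with cutorch.synchronize() calls so the queued GPU kernels have actually finished before the timer is read:

require 'torch'
require 'nn'
require 'cutorch'
require 'cunn'

-- Time one forward pass of an n x n nn.Linear, averaged over iters runs.
local function bench(n, iters, useGPU)
    local x = torch.randn(n)
    local m = nn.Sequential():add(nn.Linear(n, n))
    if useGPU then
        x = x:cuda()
        m = m:cuda()
    end
    m:forward(x) -- warm-up so allocation cost is not included in the timing
    if useGPU then cutorch.synchronize() end
    local timer = torch.Timer()
    for i = 1, iters do
        m:forward(x)
    end
    if useGPU then cutorch.synchronize() end -- wait for queued kernels before reading the timer
    return timer:time().real / iters
end

for _, n in ipairs({100, 4096}) do
    print(string.format("n=%d  cpu=%.6fs  gpu=%.6fs",
        n, bench(n, 100, false), bench(n, 100, true)))
end

At sizes in the thousands the GPU should come out well ahead of the CPU, while at 100 x 100 the per-call launch and transfer overhead dominates, which matches what you are seeing.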

Upvotes: 3
