I have been wanting to get into Torch and started with this tutorial. However, I ran into a stack overflow when running the code, specifically at the setmetatable call. I believe this is happening because of the large 50,000-image input file, but I might be wrong. I tried editing the luaconf.h file to fix it, to no avail. Other than that, I am running Torch with Lua 5.2 and without iTorch, as I had trouble setting it up.
Here is the error:
/home/student/torch/install/bin/lua: C stack overflow
stack traceback:
[C]: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
...
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:56: in function '__index'
Documents/TorchImageRecognition.lua:66: in main chunk
[C]: in function 'dofile'
...dent/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: in ?
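For what it's worth, a plain-Lua sketch of the same setmetatable pattern reproduces the error without Torch at all (the table contents below are placeholders, not my real data): if the __index function reads a key that is not a raw field of the table, the lookup re-enters __index and recurses until Lua reports a C stack overflow.

```lua
-- Minimal sketch, no Torch: placeholder table with 'data' and 'label'
-- fields, mimicking the trainset structure from the tutorial.
local t = { data = {10, 20, 30}, label = {1, 2, 3} }

setmetatable(t, {__index = function(tbl, i)
    -- 'lable' is misspelled, so it is never a raw field of the table;
    -- reading it falls through to this same __index metamethod, which
    -- reads it again, recursing until the C stack is exhausted.
    return {tbl.data[i], tbl.lable[i]}
end})

local ok, err = pcall(function() return t[1] end)
print(ok, err)  -- ok is false; err describes the stack overflow
```

Catching the error with pcall keeps the interpreter alive and shows the same message that appears at the top of the traceback above.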
Otherwise, my code should be the same as in the tutorial, from section 1 ("Load and normalize the data") through section 4 ("Train the neural network"). Here's my code; sorry for not including it initially.
require 'torch'
require 'nn'
require 'paths'
if (not paths.filep("cifar10torchsmall.zip")) then
    os.execute('wget -c https://s3.amazonaws.com/torch7/data/cifar10torchsmall.zip')
    os.execute('unzip cifar10torchsmall.zip')
end
trainset = torch.load('cifar10-train.t7')
testset = torch.load('cifar10-test.t7')
classes = {'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'}
print(trainset)
print(#trainset.data)
--itorch.image(trainset.data[100])
--print(classes[trainset.label[100]])
-- -- -- -- -- -- -- -- -- -- --
-- This code is from the previous parts of the tutorial
--net = nn.Sequential()
--net:add(nn.SpatialConvolution(1, 6, 5, 5))
--net:add(nn.ReLU())
--net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
--net:add(nn.SpatialConvolution(6, 16, 5, 5))
--net:add(nn.ReLU())
--net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
--net:add(nn.View(16*5*5))
--net:add(nn.Linear(16*5*5, 120))
--net:add(nn.ReLU())
--net:add(nn.Linear(120, 84))
--net:add(nn.ReLU())
--net:add(nn.Linear(84, 10))
--net:add(nn.LogSoftMax())
--print('Lenet5\n' .. net:__tostring())
--input = torch.rand(1, 32, 32)
--output = net:forward(input)
--print(output)
--net:zeroGradParameters()
--gradInput = net:backward(input, torch.rand(10))
--print(#gradInput)
--criterion = nn.ClassNLLCriterion()
--criterion:forward(output, 3)
--gradients = criterion:backward(output, 3)
--gradInput = net:backward(input, gradients)
--m= nn.SpatialConvolution(1, 3, 2, 2)
--print(m.weight)
--print(m.bias)
-- -- -- -- -- -- -- -- --
setmetatable(trainset, {__index = function(t, i)
    return {t.data[i], t.lable[i]}
end})
trainset.data = trainset.data:double()
function trainset:size()
    return self.data:size(1)
end
print(trainset:size())
print(trainset[33])
redChannel = trainset.data[{ {}, {1}, {}, {} }]
print(#redChannel)
mean = {}
stdv = {}
for i=1,3 do
    mean[i] = trainset.data[{ {}, {i}, {}, {} }]:mean()
    print('Channel ' .. i .. ', Mean: ' .. mean[i])
    trainset.data[{ {}, {i}, {}, {} }]:add(-mean[i])
    stdv[i] = trainset.data[{ {}, {i}, {}, {} }]:std()
    print('Channel ' .. i .. ', Standard Deviation: ' .. stdv[i])
    trainset.data[{ {}, {i}, {}, {} }]:div(stdv[i])
end
net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.SpatialConvolution(6, 16, 5, 5))
net:add(nn.ReLU())
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))
net:add(nn.View(16*5*5))
net:add(nn.Linear(16*5*5, 120))
net:add(nn.ReLU())
net:add(nn.Linear(120, 84))
net:add(nn.ReLU())
net:add(nn.Linear(84, 10))
net:add(nn.LogSoftMax())
criterion = nn.ClassNLLCriterion()
trainer = nn.StochasticGradient(net, criterion)
trainer.learningRate = 0.001
trainer.maxIteration = 5
trainer:train(trainset)