TulakHord

Reputation: 432

Chainer Autoencoder

I am trying to write a vanilla autoencoder for compressing 13 images. However I am getting the following error:

ValueError: train argument is not supported anymore. Use chainer.using_config

The shape of images is (21,28,3).

filelist = 'ex1.png', 'ex2.png',...11 other images
x = np.array([np.array(Image.open(fname)) for fname in filelist])
xs = x.astype('float32')/255.

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1764,800)
      self.l2 = L.Linear(800,300)
      # decoder part
      self.l3 = L.Linear(300,800)
      self.l4 = L.Linear(800,1764)
      self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      loss = F.mean_squared_error(h, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(x))

  def decode(self, h, train=True):
      h = self.activation(self.l3(h))
      return self.l4(x)

n_epoch = 5
batch_size = 2
model = Autoencoder()

optimizer = optimizers.SGD(lr=0.05).setup(model)
train_iter = iterators.SerialIterator(xs,batch_size)
valid_iter = iterators.SerialIterator(xs,batch_size)

updater = training.StandardUpdater(train_iter,optimizer)
trainer = training.Trainer(updater,(n_epoch,"epoch"),out="result")

from chainer.training import extensions
trainer.extend(extensions.Evaluator(valid_iter, model, device=gpu_id))

trainer.run()

Is the issue caused by the number of nodes in the model, or by something else?

Upvotes: 0

Views: 117

Answers (1)

corochann

Reputation: 1624

You need to write the "decoder" part.

When you compute the mean_squared_error loss, the shapes of h and x must be the same. An autoencoder encodes the original x into a small space (100-dim) h, and then we need to reconstruct x' from this h with the decoder part. The loss should be calculated between this reconstructed x' and the original x.

For example, as follows (sorry, I have not tested that it runs):

  • For Chainer v2~

The train argument is now handled by the global configuration, so you no longer need to pass train to the dropout function.

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608, 500)
      self.l2 = L.Linear(500, 100)
      # decoder part
      self.l3 = L.Linear(100, 500)
      self.l4 = L.Linear(500, 1308608)
    self.activation = activation

  def forward(self, x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self, x):
      x_recon = self.forward(x)
      # loss is between the reconstruction and the original input
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x):
      h = F.dropout(self.activation(self.l1(x)))
      return self.activation(self.l2(h))

  def decode(self, h):
      h = self.activation(self.l3(h))
      return self.l4(h)

  • For Chainer v1

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608, 500)
      self.l2 = L.Linear(500, 100)
      # decoder part
      self.l3 = L.Linear(100, 500)
      self.l4 = L.Linear(500, 1308608)
    self.activation = activation

  def forward(self, x, train=True):
      h = self.encode(x, train=train)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self, x, train=True):
      x_recon = self.forward(x, train=train)
      # loss is between the reconstruction and the original input
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(h))

  def decode(self, h):
      h = self.activation(self.l3(h))
      return self.l4(h)

You can also refer to the official Variational Auto Encoder example for the next step.

Upvotes: 2
