csi

Reputation: 240

How to continue training of a pretrained model using Lasagne

I trained a network for 1000 iterations and would like to continue training up to 2000 iterations without starting from the beginning. I read about different approaches to this problem and wrote the code below, so in the end I have my parameters in 'saved_params'. But I don't understand what to do with these parameters from this point on.

Can someone explain how to proceed from here? How do I get these parameters back into my training process?

from __future__ import print_function
import numpy as np
import theano
import lasagne
import pickle


input_var=None
ini = lasagne.init.HeUniform()

l_in = lasagne.layers.InputLayer(shape=(None, 1, 120, 120), input_var=input_var)
b= np.zeros((1, 4), dtype=theano.config.floatX)
b = b.flatten()

loc_l1 = lasagne.layers.MaxPool2DLayer(l_in, pool_size=(2, 2))
loc_l2 = lasagne.layers.Conv2DLayer(loc_l1, num_filters=20, filter_size=(5, 5), W=ini)
loc_l3 = lasagne.layers.MaxPool2DLayer(loc_l2, pool_size=(2, 2))
loc_l4 = lasagne.layers.Conv2DLayer(loc_l3, num_filters=20, filter_size=(5, 5), W=ini)
loc_l5 = lasagne.layers.DenseLayer(loc_l4, num_units=50, W=lasagne.init.HeUniform('relu'))
network = lasagne.layers.DenseLayer(loc_l5, num_units=4, b=b, W=lasagne.init.Constant(0.0), nonlinearity=lasagne.nonlinearities.identity)


def save_network(filename, param_values):
    # Pickle the list of parameter arrays returned by get_all_param_values.
    with open(filename, 'wb') as f:
        pickle.dump(param_values, f, protocol=-1)

def load_network(filename):
    # Load the pickled parameter list so it can be passed to set_all_param_values.
    with open(filename, 'rb') as f:
        param_values = pickle.load(f)
    return param_values


save_network("model.npz",lasagne.layers.get_all_param_values(network))

saved_params = load_network("model.npz")
lasagne.layers.set_all_param_values(network, saved_params)

Upvotes: 1

Views: 292

Answers (3)

Max Krappmann

Reputation: 520

This code is only an example. It does the following:

1. Load the trained weights from the last training run.
2. Use the same train/test split as before (otherwise you would be training on your test data).
3. Call the fit method of your network (net_loaded.fit(parameters)), which then continues training from the loaded weights.

To get a single graph out of this cascade, you have to save the values of your accuracy-over-epochs curve (or whatever you use to visualize results) from each run and combine them.
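The question uses plain Lasagne rather than a wrapper with a fit method, so here is a minimal sketch of the same idea for that setup. It assumes the network and l_in from the question are in scope and that the task is a 4-value regression; X_train/y_train, the optimizer and the learning rate are placeholders, not anything from the original post. After set_all_param_values, just compile the usual training function and keep iterating: the loaded values live in the layers' shared variables, so training continues from them.

import theano
import theano.tensor as T
import lasagne

# Targets: one 4-value vector per sample, matching the 4-unit output layer.
target_var = T.matrix('targets')

# Build the training expressions exactly as for the first 1000 iterations.
prediction = lasagne.layers.get_output(network)
loss = lasagne.objectives.squared_error(prediction, target_var).mean()

# get_all_params returns the same shared variables that set_all_param_values
# just overwrote, so the updates start from the saved weights, not from scratch.
params = lasagne.layers.get_all_params(network, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params,
                                            learning_rate=0.01, momentum=0.9)

# l_in.input_var is the input variable Lasagne created because the
# InputLayer was built with input_var=None.
train_fn = theano.function([l_in.input_var, target_var], loss, updates=updates)

# Continue where the first run stopped (X_train / y_train are hypothetical
# NumPy arrays shaped like the original training data).
for iteration in range(1000, 2000):
    train_loss = train_fn(X_train, y_train)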

Upvotes: 0

Max Krappmann

Reputation: 520

# nolearn-style example: rebuild the net, load the saved weights,
# then call fit() again so training continues from those weights.
if load:
    net1 = Lenet(classes, num_epochs)
    net1.load_weights_from('Lenet.npz')   # restore the weights saved after the last run
    network = net1
    train_X = np.float32(train_X)
    print("train_x", train_X)
    print("train_y", train_Y)
    train_Y = np.int16(train_Y)
    network = net1.fit(train_X, train_Y, num_epochs)   # resume training from the loaded weights
    print("Loading weights successfully done.")

Upvotes: 0

Max Krappmann

Reputation: 520

You can just load the weights and afterwards call the fit method again, unless you have changed the parameters in between. If you want a single graph, save your error values for the first 1000 epochs so you can plot them together with the new ones.
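For the combined graph, here is a minimal sketch of one way to keep the error history across both runs; the file names, train_fn and the per-iteration train_loss value are assumptions carried over from the sketch above, not part of the question's code.

import numpy as np

# Loss values recorded during the first 1000 iterations (assumed to have been
# saved with np.save at the end of the first run).
losses = list(np.load('losses_first_1000.npy'))

for iteration in range(1000, 2000):
    train_loss = train_fn(X_train, y_train)   # training function from the sketch above
    losses.append(float(train_loss))

# One combined 2000-point curve for plotting the error over the whole run.
np.save('losses_full_2000.npy', np.array(losses))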

Upvotes: 0
