user41986

Reputation: 81

MXNet interfering with Python logging

I want each epoch's information to be stored in a log file so I can see accuracy versus epoch, but nothing gets written to the log. Why?

import os
import logging
import mxnet as mx

mnist = mx.test_utils.get_mnist()
batch_size = 100

print(os.getcwd())

log_file = '1.log'  # 'process_fold_' + str(0) + '_trial_' + str(1) + '.log'
logging.basicConfig(format='%(asctime)s %(levelname)s - %(message)s', datefmt='%d/%m/%Y %I:%M:%S %p', filename=log_file, level=logging.INFO)
logging.info('Started training on fold {} at trial {}'.format(0, 0))

train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'],
                             batch_size)  # important, as the prediction need not have an equal batch size
lenet = get_my_net()
# create a trainable module on GPU 0
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu())
# train with the same
lenet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate':0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(batch_size, 100),
                num_epoch=10,
                initializer=mx.init.Xavier(rnd_type='gaussian', factor_type="in", magnitude=2))

test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)

Upvotes: 1

Views: 556

Answers (2)

user41986

Reputation: 81

Found out that MXNet has a built-in logger that writes the accuracy for each epoch to the log file (here 'process_fold_0_trial_0.log'). To use it, you have to initialize it and then pass the object as the logger parameter.

Initializing the logger:

import time
import mxnet

current_fold = 0
current_trial = 0
logfilenamer = 'process_fold_' + str(current_fold) + '_trial_' + str(current_trial) + '.log'
logzulu = mxnet.log.get_logger(name='log_it', filename=logfilenamer, level=mxnet.log.DEBUG, filemode='w')
logzulu.error('what the hell ' + time.strftime('%x %X'))

Continuation:

# 1. iterators
train_iter = mx.io.NDArrayIter(mnist['train_data'],mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'],
                             batch_size)  
# 2. getting my net
lenet = get_my_net()
# create a trainable module on GPU 0

Give the logger object as the logger parameter:

lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu(), logger=logzulu)
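
If mxnet.log isn't available in your MXNet version, the same kind of file logger can be put together with just the standard logging module and passed to the Module the same way. This is only a rough sketch (it reuses logfilenamer from above, and the handler/format choices are illustrative, not MXNet's own defaults):

import logging

# build a named logger that writes to the same file
logzulu = logging.getLogger('log_it')
logzulu.setLevel(logging.DEBUG)
fhandler = logging.FileHandler(filename=logfilenamer, mode='w')
fhandler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s - %(message)s'))
logzulu.addHandler(fhandler)

# pass it to the module exactly as before
lenet_model = mx.mod.Module(symbol=lenet, context=mx.gpu(), logger=logzulu)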

Upvotes: 1

Thom Lane

Reputation: 1063

Are you running this code through a Jupyter notebook?

If so, you have to configure the logging slightly differently. You'll see in the documentation for logging.basicConfig:

This function does nothing if the root logger already has handlers configured for it.

And Jupyter notebooks configure a handler before you get the chance to. So try something like:

import logging
logger = logging.getLogger()
fhandler = logging.FileHandler(filename='example.log', mode='a')
formatter = logging.Formatter('%(asctime)s %(levelname)s - %(message)s')
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.setLevel(logging.INFO)
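
On Python 3.8+, another option is to pass force=True to basicConfig, which removes the handlers the notebook already installed before applying your configuration. A minimal sketch, reusing the same format string as above (the file name is just an example):

import logging

logging.basicConfig(format='%(asctime)s %(levelname)s - %(message)s',
                    datefmt='%d/%m/%Y %I:%M:%S %p',
                    filename='example.log',
                    level=logging.INFO,
                    force=True)  # Python 3.8+: remove and close any existing root handlers first
logging.info('this now ends up in example.log even inside a notebook')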

Upvotes: 0
