Reputation: 728
Consider a case where a Python module contains multiple functions, each of which takes an id:

def f1(id):
    # log into file f1/{id}.txt

def f2(id):
    # log into file f2/{id}.txt

Assume the ids passed to each function are always unique: if 1 is passed to f1, 1 can't be requested again with f1. The same goes for the other functions.

I want logging per function, not per module, so that each function logs into a unique file like function_name/id.txt.

After the function has executed, there is no need to keep function_name/id.txt open for logging, because the next request will carry a different id. So the file handler for that file should be closed once the function finishes.

How can logging per function be implemented in Python so that all exceptions are caught properly per function?
I am trying this approach:
import logging

def setup_logger(name, log_file, level=logging.DEBUG):
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

def f1(id):
    logger = setup_logger('f1_id_logger', f'f1/{id}.txt', level=logging.DEBUG)

def f2(id):
    logger = setup_logger('f2_id_logger', f'f2/{id}.txt', level=logging.DEBUG)
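One way to extend this setup_logger idea so that the handler really is released after each call is a context manager. This is a hedged sketch, not the asker's code: the name per_id_logger is made up, and the directory is created on demand.

```python
import logging
import os
from contextlib import contextmanager

@contextmanager
def per_id_logger(name, log_file, level=logging.DEBUG):
    """Yield a logger with a per-call FileHandler; detach and close it on exit."""
    os.makedirs(os.path.dirname(log_file), exist_ok=True)  # ensure e.g. f1/ exists
    handler = logging.FileHandler(log_file)
    handler.setFormatter(logging.Formatter('[%(asctime)s][%(levelname)s]%(message)s'))
    logger = logging.getLogger(name)
    logger.setLevel(level)
    logger.addHandler(handler)
    try:
        yield logger
    finally:
        logger.removeHandler(handler)  # detach so old files stop receiving records
        handler.close()                # release the file descriptor

def f1(id):
    with per_id_logger('f1_id_logger', f'f1/{id}.txt') as logger:
        logger.debug('In f1 with id %s', id)
```

After `with` exits, the logger has no handlers left, so the next id starts with a fresh file.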
But my concerns are:
Upvotes: 4
Views: 3353
Reputation: 33714
This is a great case for using decorators.
import logging
from os import mkdir
from os.path import exists
from sys import exc_info # for retrieving the exception
from traceback import format_exception # for formatting the exception
def id_logger_setup(level=logging.DEBUG):
    def setup_logger(func):
        if not exists(func.__name__):  # make the directory if it doesn't exist
            mkdir(func.__name__)
        logger = logging.getLogger("{}_id_logger".format(func.__name__))
        logger.setLevel(level)
        def _setup_logger(id, *args, **kwargs):
            handler = logging.FileHandler("{}/{}.txt".format(func.__name__, id))  # a unique handler for each id
            handler.setFormatter(logging.Formatter("[%(asctime)s][%(levelname)s]%(message)s"))
            logger.addHandler(handler)
            try:
                rtn = func(id, logger=logger, *args, **kwargs)
            except Exception:  # if the function breaks, catch the exception and log it
                logger.critical("".join(format_exception(*exc_info())))
                rtn = None
            finally:
                logger.removeHandler(handler)  # remove ties between the logger and the soon-to-be-closed handler
                handler.close()  # close the file handler
            return rtn
        return _setup_logger
    return setup_logger

@id_logger_setup(level=logging.DEBUG)  # set the level
def f1(id, *, logger):
    logger.debug("In f1 with id {}".format(id))

@id_logger_setup(level=logging.DEBUG)
def f2(id, *, logger):
    logger.debug("In f2 with id {}".format(id))

@id_logger_setup(level=logging.DEBUG)
def f3(id, *, logger):
    logger.debug("In f3 with id {}".format(id))
    logger.debug("Something's going wrong soon...")
    int('opps')  # raises an error
f1(1234)
f2(5678)
f1(4321)
f2(8765)
f3(345774)
From the code sample, you get the following:
f1/
    1234.txt
    4321.txt
f2/
    5678.txt
    8765.txt
f3/
    345774.txt
Where in the first four txt files you get something like this:
[2018-04-26 18:49:29,209][DEBUG]In f1 with id 1234
and in f3/345774.txt, you get:
[2018-04-26 18:49:29,213][DEBUG]In f3 with id 345774
[2018-04-26 18:49:29,213][DEBUG]Something's going wrong soon...
[2018-04-26 18:49:29,216][CRITICAL]Traceback (most recent call last):
  File "/path.py", line 20, in _setup_logger
    rtn = func(id, logger=logger, *args, **kwargs)
  File "/path.py", line 43, in f3
    int('opps')
ValueError: invalid literal for int() with base 10: 'opps'
Here are the answers to your questions:
Using decorators, each logger is created only once. Since the logger's name follows the format "{func_name}_id_logger", there is exactly one logger per distinct function, and that one logger is reused across all ids, so no extra loggers are needed.

Yes, the decorator will catch any exception that is a subclass of Exception. Although your exception will be caught and logged regardless, you should still make an attempt at catching and handling expected exceptions within the function itself.

No, the file handler will be closed appropriately: the finally block removes it from the logger and closes it after every call.
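Two of these points can be demonstrated in a few lines. This is an illustrative sketch: parse_id and its fallback value are made up for the example, not part of the answer's code.

```python
import logging

# getLogger caches by name, so repeated calls return the very same object:
# one logger per distinct function name is all the decorator ever creates.
assert logging.getLogger('f1_id_logger') is logging.getLogger('f1_id_logger')

logger = logging.getLogger('parse_demo')

def parse_id(raw):
    """Handle the failure we expect locally; let anything unexpected
    propagate up to the decorator's except block."""
    try:
        return int(raw)
    except ValueError:
        logger.warning('could not parse %r, falling back to -1', raw)
        return -1
```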
Upvotes: 2
Reputation: 19352
You shouldn't have to set up the loggers for each case separately. You should set them up once so that you have two loggers and each outputs to a different file. Then use the two different loggers in the two functions.
For example, you can configure the loggers this way*:
import logging.config
logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'simple_formatter': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'first_handler': {
            'class': 'logging.FileHandler',
            'formatter': 'simple_formatter',
            'filename': 'C:\\Temp\\log1.txt'
        },
        'second_handler': {
            'class': 'logging.FileHandler',
            'formatter': 'simple_formatter',
            'filename': 'C:\\Temp\\log2.txt'
        }
    },
    'loggers': {
        'first_logger': {
            'handlers': ['first_handler']
        },
        'second_logger': {
            'handlers': ['second_handler']
        }
    }
})
Then, simply use one or the other logger where you need them:
def f1():
    logger = logging.getLogger('first_logger')
    logger.warning('Hello from f1')

def f2():
    logger = logging.getLogger('second_logger')
    logger.warning('Hello from f2')
*There are different ways to configure loggers, see https://docs.python.org/3.6/library/logging.config.html for other options.
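One subtlety worth noting: the config above sets no 'level' on the loggers, so they inherit the root's effective level of WARNING, which is why the usage examples call logger.warning. A sketch adding the level so debug messages get through (using a temp directory here instead of C:\Temp so it runs anywhere):

```python
import logging.config
import os
import tempfile

logdir = tempfile.mkdtemp()  # portable stand-in for C:\Temp

logging.config.dictConfig({
    'version': 1,
    'formatters': {
        'simple_formatter': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'first_handler': {
            'class': 'logging.FileHandler',
            'formatter': 'simple_formatter',
            'filename': os.path.join(logdir, 'log1.txt')
        }
    },
    'loggers': {
        'first_logger': {
            'handlers': ['first_handler'],
            'level': 'DEBUG'  # without this, the logger inherits WARNING from the root
        }
    }
})

logging.getLogger('first_logger').debug('now visible in log1.txt')
```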
Upvotes: 1