Reputation: 501
I want to create two log files: one that logs everything and another that logs just warnings and errors.
Here is my code:
import logging
logger = logging.getLogger(__name__)
# custom log handlers
err_handler = logging.FileHandler(filename='error.log')
info_handler = logging.FileHandler(filename='info.log')
# setting levels of the handlers
err_handler.setLevel(logging.WARNING)
info_handler.setLevel(logging.INFO)
# formatting for handlers
err_formatter = logging.Formatter('%(name)s - %(asctime)s - %(levelname)s - %(funcName)s - %(message)s')
info_formatter = logging.Formatter('%(name)s - %(asctime)s - %(levelname)s - %(funcName)s - %(message)s')
# setting the formatters
err_handler.setFormatter(err_formatter)
info_handler.setFormatter(info_formatter)
# add the handlers to the custom logger
logger.addHandler(err_handler)
logger.addHandler(info_handler)
logger.info('test_info')
logger.warning('test_warn')
logger.error('test_err')
logger.info('test_info')
The output files are:
info.log
__main__ - 2019-08-22 15:13:36,625 - WARNING - <module> - test_warn
__main__ - 2019-08-22 15:13:36,625 - ERROR - <module> - test_err
error.log
__main__ - 2019-08-22 15:13:36,625 - WARNING - <module> - test_warn
__main__ - 2019-08-22 15:13:36,625 - ERROR - <module> - test_err
Why are the info logs not showing?
Upvotes: 3
Views: 207
Reputation: 77902
Loggers have a concept of effective level. If a level is not explicitly set on a logger, the level of its parent is used instead as its effective level. If the parent has no explicit level set, its parent is examined, and so on - all ancestors are searched until an explicitly set level is found. The root logger always has an explicit level set (WARNING by default).
Which is why, as mentioned by Simon and Sraw, you have to set your logger's level to at least the lowest level you're interested in.
This being said, manual configuration is a PITA when you can use dictConfig instead.
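For reference, here is roughly what that could look like with dictConfig. This is only a sketch: the handler and formatter names such as info_file are illustrative, and the config just reproduces the two file handlers from the question.
import logging
from logging.config import dictConfig

LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'default': {
            'format': '%(name)s - %(asctime)s - %(levelname)s - %(funcName)s - %(message)s',
        },
    },
    'handlers': {
        'info_file': {
            'class': 'logging.FileHandler',
            'filename': 'info.log',
            'level': 'INFO',
            'formatter': 'default',
        },
        'error_file': {
            'class': 'logging.FileHandler',
            'filename': 'error.log',
            'level': 'WARNING',
            'formatter': 'default',
        },
    },
    'root': {
        'level': 'INFO',  # the logger-level filter that was missing in the question
        'handlers': ['info_file', 'error_file'],
    },
}

dictConfig(LOGGING_CONFIG)
logger = logging.getLogger(__name__)
logger.info('test_info')     # ends up in info.log only
logger.warning('test_warn')  # ends up in both files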
Oh, and one very important thing that the docs don't make clear (IMHO at least) and that is often not understood at first: logging configuration should never be done by your library code, only by the main (entry-point) script.
The point here is that you'll need different configs depending on how your code is used. Even for the same project, you will probably want to log everything to sys.stderr in development and log only important things (to files, syslog or whatever) in production - and let's not talk of pure libs that are to be used by other apps with totally different execution environments and logging needs.
IOW, what you want is:
- your library code gets a logger (preferably using __name__ as you did) and uses it but never configures it (well... almost, cf https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library)
- your application entry point code configures the loggers (and can of course use one too if needed).
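As a minimal sketch of that split (the module names mylib and main are hypothetical, and the entry point could just as well call the dictConfig shown above instead of basicConfig):
# mylib.py - library code: get a logger, use it, never configure it
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.info('doing work')
    logger.warning('something looks off')

# main.py - entry point: configure logging here, then use the library
import logging
import mylib

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)  # dev config: everything to stderr
    mylib.do_work()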
Upvotes: 2
Reputation: 1902
The logger should be configured to accept LogRecords from a certain log level upwards. If no level is set, then WARNING is the default level assumed for the root logger.
So, set logger.setLevel(logging.DEBUG) so that your logger accepts all the records.
Then, in the handlers, you can configure filtering of log records from a certain level upwards.
And btw, if you are going to use the same format for both log files, you can use a single formatter object.
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)s - %(asctime)s - %(levelname)s - %(funcName)s - %(message)s')
err_handler.setFormatter(formatter)
info_handler.setFormatter(formatter)
Upvotes: 0
Reputation: 2136
I believe the trick here is to set the logging level of the logger itself to the lowest level of interest. So set logger.setLevel(logging.INFO) and you should see your desired behaviour.
Upvotes: 0
Reputation: 20214
Well, logging works like a filter chain: logger -> handler. So first you need to ensure your logger's level is at least INFO.
Add logger.setLevel(logging.INFO) after logger = logging.getLogger(__name__).
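A minimal sketch of that filter chain, reusing the file names and levels from the question: each record must first pass the logger's own level, then each handler's level.
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)        # stage 1: the logger's own level

info_handler = logging.FileHandler('info.log')
info_handler.setLevel(logging.INFO)  # stage 2: each handler's level
err_handler = logging.FileHandler('error.log')
err_handler.setLevel(logging.WARNING)
logger.addHandler(info_handler)
logger.addHandler(err_handler)

logger.debug('dropped by the logger itself')  # below INFO, reaches no handler
logger.info('written to info.log only')       # passes logger, blocked by err_handler
logger.warning('written to both files')       # passes logger and both handlers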
Upvotes: 2