Reputation: 45
I have changed the numeric values of the log levels using addLevelName() and then put log calls in my modules, but in the log file log.error gives me the level name DEBUG. Here is the piece of code that I am trying:
import datetime
import logging
import os


class LogAttribute:
    def __init__(self):
        logger = logging.getLogger()
        logging.addLevelName(50, "ERROR")
        logging.addLevelName(40, "DEBUG")
        logging.addLevelName(30, "WARNING")
        logging.addLevelName(20, "INFO")
        logging.addLevelName(10, "VERBOSE")
        check = logging.getLevelName(40)
        logger.setLevel(config_obj["loggerLevel"])  # config_obj is loaded elsewhere
        output_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
        filename = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") + "_EP_script.log"
        handler = logging.FileHandler(os.path.join(output_dir, filename))
        formatter = logging.Formatter(" %(levelname)s - %(message)s")
        handler.setFormatter(formatter)
        logger.addHandler(handler)
Upvotes: 0
Views: 1621
Reputation: 1122172
The logging module is not set up for arbitrary re-assignment of the standard logging levels. The logging.addLevelName() function is really only meant to add new levels, not to adjust existing ones.
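To give a sense of its intended use: addLevelName() normally registers an additional level alongside the standard ones rather than renaming them. A minimal sketch, where the VERBOSE name and the value 15 are just illustrative choices:

import logging

VERBOSE = 15  # a new level between DEBUG (10) and INFO (20)
logging.addLevelName(VERBOSE, "VERBOSE")

logging.basicConfig(level=VERBOSE, format="%(levelname)s - %(message)s")
logging.log(VERBOSE, "this record is emitted with the new VERBOSE level name")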
What happens under the hood is that logging.error() uses the module-level constant logging.ERROR to log the error message. That constant has been set to 40, a numeric value you just told the module to map to the string 'DEBUG'.
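You can see the mismatch directly; a minimal sketch reproducing the effect of the code in the question:

import logging

logging.addLevelName(40, "DEBUG")  # 40 is the numeric value of logging.ERROR

print(logging.ERROR)                        # 40 -- the constant is unchanged
print(logging.getLevelName(logging.ERROR))  # 'DEBUG' -- the name now mapped to 40

logging.basicConfig(format="%(levelname)s - %(message)s")
logging.error("this error record is labelled DEBUG in the log output")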
If you really must re-map all the levels, you also need to reassign the constants. Because Python is a dynamic language, that is certainly possible:
logging.ERROR = 50
However, I strongly advise you not to do this. There may be third-party frameworks that rely on the constants to stay, well, constant.
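If you do ignore that advice, the reassignment has to mirror every addLevelName() call from the question; a minimal sketch (note that logging.CRITICAL already uses the value 50, so the renamed levels start to collide):

import logging

logging.ERROR = 50    # matches addLevelName(50, "ERROR"), but now equals logging.CRITICAL
logging.DEBUG = 40    # matches addLevelName(40, "DEBUG")
logging.VERBOSE = 10  # new constant for the custom VERBOSE level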
If you are trying to adjust how a third-party library logs, you have better options. Each log message includes a logger name, and names with a . in them form a hierarchy, so a logger named foo.bar.baz is seen as a child of foo.bar and foo, letting you adjust logging for child logger nodes through settings on a parent node. See the introduction of the Logger objects documentation for details on how to configure these.
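For example, if the messages you want to tame come from a hypothetical package named somelibrary, you can configure its whole logger subtree from your own code without touching the library (the names below are placeholders):

import logging

# raise the threshold for somelibrary and all of its child loggers
logging.getLogger("somelibrary").setLevel(logging.CRITICAL)

# or give one child logger its own handler, e.g. a separate file
logging.getLogger("somelibrary.network").addHandler(
    logging.FileHandler("somelibrary_network.log"))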
Even if the per-logger-object and per-hierarchy configuration options do not fit your specific use case, you can almost certainly monkeypatch such a module to replace the logger object with a custom wrapper based on the LoggerAdapter pattern. That's because the standard, best-practice method of logging in a third-party library is to create a top-level logger object and apply all logging to that object. You can replace that object with a wrapper:
import logging

level_map = {
    logging.ERROR: logging.CRITICAL,
    logging.DEBUG: logging.ERROR,
}

class RemappingLogger(logging.LoggerAdapter):
    def __init__(self, logger, extra=None):
        # make the extra parameter optional
        if extra is None:
            extra = {}
        super().__init__(logger, extra)

    def log(self, lvl, *args, **kwargs):
        lvl = level_map.get(lvl, lvl)
        super().log(lvl, *args, **kwargs)

import somelibrary
import somelibrary.submodule

somelibrary.logger = RemappingLogger(somelibrary.logger)
somelibrary.submodule.logger = RemappingLogger(somelibrary.submodule.logger)
You can use the same pattern to filter specific messages; it may be sufficient to provide a custom LoggerAdapter.process() method in that case.
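For completeness, a minimal sketch of such a process() override; the matching rule and the somelibrary name are purely illustrative:

import logging

class RewritingLogger(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        # rewrite messages that match some rule before they are emitted
        if "connection reset" in str(msg):
            msg = "[noisy network detail] %s" % msg
        return msg, kwargs

import somelibrary  # hypothetical package from the example above
somelibrary.logger = RewritingLogger(somelibrary.logger, {})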
Upvotes: 3