PJernlund

Reputation: 53

How do I specify which logging configuration to use from a JSON file?

I'm trying to create a wrapper class for the Python logging library. The idea is that a user can provide the name of a logger to the constructor and have the instance configured based on the contents of a JSON file, where the provided name is a key for a corresponding section of the config file.

Here is my code:

import logging
import logging.config as logging_config


class LogLib:
    def __init__(self, logger_name=""):
        # Config is a project-specific helper that returns the "LogLib"
        # section of the JSON config file as a dict
        conf_dict = Config.get("LogLib")
        logging_config.dictConfig(conf_dict)
        if not logger_name:
            self.logger = logging.getLogger()
        else:
            self.logger = logging.getLogger(logger_name)

    def debug(self, message, db=False):
        caller_info = self.get_callerInfo()
        msg = message + caller_info
        self.logger.debug(msg)
        if db:
            self.log_db(message, "DEBUG")

    def info(self, message, db=False):
        caller_info = self.get_callerInfo()
        msg = message + caller_info
        self.logger.info(msg)
        if db:
            self.log_db(message, "INFO")

    def warning(self, message, db=False):
        caller_info = self.get_callerInfo()
        msg = message + caller_info
        self.logger.warning(msg)
        if db:
            self.log_db(message, "WARNING")

    def error(self, message, db=False, stacktrace=False):
        caller_info = self.get_callerInfo()
        msg = message + caller_info
        self.logger.error(msg, exc_info=stacktrace)
        if db:
            self.log_db(message, "ERROR")

    def critical(self, message, db=False, stacktrace=False):
        caller_info = self.get_callerInfo()
        msg = message + caller_info
        self.logger.critical(msg, exc_info=stacktrace)
        if db:
            self.log_db(message, "CRITICAL")

    def log_db(self, message, level):
        raise NotImplementedError
        # psql = PostgresqlConnector()
        # with psql.create_session() as session:
        #    psql.insert(session, Log(message=message, level=level))

    def get_callerInfo(self):
        # findCaller returns the tuple (pathname, lineno, funcName, stack_info)
        raw = self.logger.findCaller(stack_info=False)
        caller_fileName = raw[0].rsplit("/", 1)[1].split(".")[0]
        return f"\nSOURCE > {caller_fileName}.{raw[2]}, Line: {raw[1]}\n"

To do some testing, I added a small main() to the bottom of the LogLib file, outside of the class. It looks like this:

    def main():
        logger = LogLib(logger_name="db_logger")
        logger.debug("Debug test - This should not show up in the file.")
        logger.info("Info test")
        logger.warning("Warning test")
        logger.error("Error test")

    if __name__ == "__main__":
        main()

To configure this wrapper, I created a config section in JSON format which is then fetched and used in __init__. The config looks like this:

"LogLib": {
    "version": 1,
    "root": {
        "handlers": ["console", "file"],
        "level": "DEBUG"
    },
    "db_logger": {
        "handlers": ["db_file"],
        "level": "INFO"
    },
    "handlers": {
        "console": {
            "formatter": "console_formatter",
            "class": "logging.StreamHandler",
            "level": "WARNING"
          },
        "file": {
          "formatter": "file_formatter",
          "class": "logging.FileHandler",
          "level": "DEBUG",
          "filename": "C:\\Users\\name\\Documents\\GitHub\\proj\\logs\\app_err.log"
        },
        "db_file": {
            "formatter": "file_formatter",
            "class": "logging.FileHandler",
            "level": "INFO",
            "filename": "C:\\Users\\name\\Documents\\GitHub\\proj\\logs\\db.log"
        }
      },
    "formatters": {
        "console_formatter": {
          "format": "%(asctime)s [%(levelname)s] > %(message)s",
          "datefmt": "%d/%m/%Y-%I:%M:%S"
        },
        "file_formatter": {
          "format": "%(asctime)s [%(levelname)s] > %(message)s",
          "datefmt": "%d/%m/%Y-%I:%M:%S"
        }
      }
}

The root logger works fine as configured (it writes to app_err.log and prints to the console at the given levels), but when I provide the name "db_logger" it does not work and defaults to root regardless.

What I want is this: when a user provides a name to the constructor via the "logger_name" parameter, the wrapper should look up that name in the config and apply the corresponding configuration to the LogLib instance. In this case, I want all log messages of level INFO or higher written to a file called db.log, with no console output.

Upvotes: 1

Views: 1386

Answers (1)

saaj

Reputation: 25224

I think a logging wrapper is in general a bad design, and I've seen many of them in both commercial and OSS codebases. I would call it an anti-pattern in Python, because the logging package is actually very extensible, and it's rare to outgrow its (extension) mechanisms.

The logging library takes a modular approach and offers several categories of components: loggers, handlers, filters, and formatters.

  • Loggers expose the interface that application code directly uses.
  • Handlers send the log records (created by loggers) to the appropriate destination.
  • Filters provide a finer grained facility for determining which log records to output.
  • Formatters specify the layout of log records in the final output.
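
As a small illustration of how these components fit together programmatically (the logger and file names mirror the question's config):

    import logging

    logger = logging.getLogger("db_logger")             # logger
    handler = logging.FileHandler("db.log")             # handler
    handler.setLevel(logging.INFO)
    handler.setFormatter(logging.Formatter(             # formatter
        "%(asctime)s [%(levelname)s] > %(message)s"))
    handler.addFilter(logging.Filter("db_logger"))      # filter
    logger.addHandler(handler)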

And some points specifically about the snippet:

  1. Unless you want to change the JSON config file at runtime, there's no point in calling logging.config.dictConfig per logger. Call it once at application bootstrap, for instance:
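
    A minimal sketch of the bootstrap approach (the file name logging.json is illustrative):

        # application entry point
        import json
        import logging
        import logging.config

        def bootstrap_logging():
            # Parse the JSON file and configure logging exactly once
            with open("logging.json") as f:
                logging.config.dictConfig(json.load(f))

        if __name__ == "__main__":
            bootstrap_logging()
            # From here on, any module can simply ask for a named logger
            logging.getLogger("db_logger").info("Logging is configured")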

  2. If you do want to change the logging configuration at runtime, note that you likely need to set disable_existing_loggers=False (see the sketch after the quotes below). There's a warning about it in the documentation:

    The fileConfig() function takes a default parameter, disable_existing_loggers, which defaults to True for reasons of backward compatibility. This may or may not be what you want, since it will cause any non-root loggers existing before the fileConfig() call to be disabled unless they (or an ancestor) are explicitly named in the configuration. Please refer to the reference documentation for more information, and specify False for this parameter if you wish.

    The dictionary passed to dictConfig() can also specify a Boolean value with key disable_existing_loggers, which if not specified explicitly in the dictionary also defaults to being interpreted as True. This leads to the logger-disabling behaviour described above, which may not be what you want - in which case, provide the key explicitly with a value of False.

    Incremental Configuration also has this warning:

    [...] there is not a compelling case for arbitrarily altering the object graph of loggers, handlers, filters, formatters at run-time, once a configuration is set up; the verbosity of loggers and handlers can be controlled just by setting levels (and, in the case of loggers, propagation flags). Changing the object graph arbitrarily in a safe way is problematic in a multi-threaded environment; while not impossible, the benefits are not worth the complexity it adds to the implementation.

    If you still want to proceed, logging.config.listen can be an inspiration.
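
    For instance, re-applying configuration without disabling loggers created earlier might look like this (the config content is illustrative):

        import logging.config

        logging.config.dictConfig({
            "version": 1,
            # Keep loggers that were created before this call alive
            "disable_existing_loggers": False,
            "root": {"level": "INFO"},
        })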

  3. You don't need get_callerInfo. LogRecord has these attributes out of the box: filename, module, lineno, funcName. To expose them in your logs, reference them in the format string, or subclass logging.Formatter if you want custom formatting logic.
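
    For example, a formatter built on those attributes, with no manual stack inspection (the format string extends the one from the question's config):

        import logging

        # module, funcName and lineno are filled in by LogRecord automatically;
        # the same format string can be used in the JSON "formatters" section
        formatter = logging.Formatter(
            "%(asctime)s [%(levelname)s] %(module)s.%(funcName)s:%(lineno)d > %(message)s",
            datefmt="%d/%m/%Y-%I:%M:%S",
        )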

  4. Want to write log records to a new medium, say Postgres, not already supported by stdlib logging.handlers or 3rd-party packages? Write a subclass of logging.Handler, as sketched below. Also note that logging to the production database may be tricky. A couple of examples: logbeam (a logging handler for AWS CloudWatch Logs) and Chronologer (a Python client/server logging system writing to MySQL and SQLite, which I wrote).
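
    A minimal handler sketch; PostgresqlConnector and Log are the assumed project helpers from the commented-out log_db method in the question:

        import logging

        class PostgresHandler(logging.Handler):
            def emit(self, record):
                try:
                    # Assumed project helpers, as in the question's log_db
                    psql = PostgresqlConnector()
                    with psql.create_session() as session:
                        psql.insert(session, Log(
                            message=record.getMessage(),
                            level=record.levelname,
                        ))
                except Exception:
                    # A logging handler should never crash the application
                    self.handleError(record)

    Registered via the dict config (a custom class goes under the special "()" key of a handler entry), it works with plain logging.getLogger calls and makes the wrapper unnecessary.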

Upvotes: 2
