Reputation: 365
I recently ran into a logging issue with setting the log level at runtime. I have a demo Django project for testing this; below is the page:
The log is written to /tmp/run.log.
When I deploy it with gunicorn + nginx (nginx serving the static files) and 4 gunicorn workers, setting the log level only affects one of the workers:
In the two screenshots above, I set the log level to ERROR, but it only took effect in worker 74096.
Here is some information and the Django code.
System Info:
System: Centos 7.4 x64
Python: 2.7.5
Django: 1.11.2
Gunicorn: 19.7.1
Django logging config:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '[%(asctime)s] %(levelname)s [%(filename)s:%(lineno)s] %(message)s'
        }
    },
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.handlers.RotatingFileHandler',
            'maxBytes': 1024,
            'backupCount': 5,
            'filename': '/tmp/run.log',
            'formatter': 'verbose'
        },
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'verbose'
        }
    }
}
Set log level function: (snippet not shown)
Gunicorn config:
name = "django_docker"
bind = "unix:/var/run/django_docker.sock"
worker_class = "egg:meinheld#gunicorn_worker"
#workers = multiprocessing.cpu_count() * 2 + 1
workers = 4
reload = True
umask = 0002
user = 'nginx'
group = 'nginx'
accesslog = "/tmp/gunicorn.access.log"
errorlog = "/tmp/gunicorn.error.log"
raw_env = ["DJANGO_SETTINGS_MODULE=django_docker.settings"]
chdir = "/home/user/workspace/django_docker/"
pidfile = "/var/run/gunicorn.pid"
daemon = True
logconfig_dict = {
    'version': 1,
    'disable_existing_loggers': False,
    'root': {'level': 'INFO', 'handlers': ['console']},
    'loggers': {
        'gunicorn.error': {
            'level': 'INFO',
            'handlers': ['error_file'],
            'propagate': True,
            'qualname': 'gunicorn.error'
        },
        'gunicorn.access': {
            'level': 'INFO',
            'handlers': ['access_file'],
            'propagate': False,
            'qualname': 'gunicorn.access'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'generic',
            'stream': 'ext://sys.stdout'
        },
        'error_file': {
            'class': 'logging.FileHandler',
            'formatter': 'generic',
            'filename': '/tmp/gunicorn.error.log'
        },
        'access_file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'maxBytes': 1024 * 1024,
            'backupCount': 5,
            'formatter': 'generic',
            'filename': '/tmp/gunicorn.access.log'
        }
    },
    'formatters': {
        'generic': {
            'format': '%(asctime)s [%(process)d] [%(levelname)s] %(message)s',
            'datefmt': '[%Y-%m-%d %H:%M:%S %z]',
            'class': 'logging.Formatter'
        },
        'access': {
            'format': '%(message)s',
            'class': 'logging.Formatter'
        }
    }
}
(Note: in the dictConfig schema, the root logger is configured by a top-level 'root' key, not an entry named "root" under 'loggers', and stream references must use the 'ext://sys.stdout' form; both are fixed above.)
I have also tried deploying with uWSGI and 4 workers; the same problem occurs there.
Could anyone help me? Thanks.
Upvotes: 2
Views: 1048
Reputation: 73
When you run gunicorn with 4 workers, it actually creates 4 separate Django application processes.
When a request comes in, gunicorn passes it to just one of the 4 processes (this balances the load, so no single process is under pressure while the other 3 sit idle).
So when you access port 8001 of your host and send a request, just one of the 4 apps receives and processes it. The other 3 apps have their own separate memory and don't see a setting changed in another app.
You can verify this behavior by setting a variable in your app and reading it back. For instance, start with i = 0 as the default, set i = 1 through one request, and then request a page that displays i. Because a different worker may have handled the i = 1 request, there is a chance the page shows 0; refresh and you may see 1. Each time, a different worker answers your request with its own memory and data.
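The experiment above can be sketched without Django at all: the snippet below (illustrative names, with multiprocessing standing in for gunicorn's worker processes) shows that each process keeps its own copy of module-level state, so a change made while handling one request is invisible to the other workers.

```python
import multiprocessing

level = "DEBUG"  # module-level "setting"; every process gets its own copy


def worker(idx, queue):
    global level
    if idx == 0:
        level = "ERROR"  # only worker 0 handles the "set log level" request
    queue.put((idx, level))  # report what this process sees


if __name__ == "__main__":
    queue = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(i, queue))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # worker 0 sees "ERROR"; workers 1-3 still see "DEBUG"
    print(sorted(queue.get() for _ in range(4)))
```

This mirrors your symptom exactly: the worker that happened to handle the "set log level" request changed its own copy, and the other three kept logging at the old level.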
As a solution, you can use storage that is external to the Django processes. For example, store your setting (or other parameters) in a Redis database: read it from Redis whenever you need it, and write it to Redis when you want to change it. That way, all 4 workers share the same state.
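One way to apply that idea to logging is a minimal sketch like the following, which attaches a logging.Filter that re-reads the current level from shared storage on every record. A temp file stands in for Redis here to keep the example self-contained; with redis-py you would use get()/set() on a key instead. All names are illustrative, not part of any library API.

```python
import logging
import os
import tempfile

# Shared storage for the effective log level (a file standing in for Redis).
LEVEL_FILE = os.path.join(tempfile.gettempdir(), "shared_log_level.txt")


def set_shared_level(name):
    """Any worker can call this; the change becomes visible to all workers."""
    with open(LEVEL_FILE, "w") as f:
        f.write(name)


class SharedLevelFilter(logging.Filter):
    """Drop records below the level currently stored in shared storage."""

    def filter(self, record):
        try:
            with open(LEVEL_FILE) as f:
                threshold = getattr(logging, f.read().strip(), logging.DEBUG)
        except OSError:
            threshold = logging.DEBUG  # no shared value yet: let everything through
        return record.levelno >= threshold
```

Attach it with `handler.addFilter(SharedLevelFilter())` in each worker; because the filter consults the shared value per record, a level change made while handling a request in one worker takes effect in all of them.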
Upvotes: 2