Reputation: 11
Background: Our application currently uses Celery with Redis as its asynchronous task framework, and a Celery prefork worker (Python) as the task executor. Configuration:
```python
task_ignore_result=True,
worker_max_tasks_per_child=50,
worker_max_memory_per_child=60000,
broker_connection_retry=False,
redis_max_connections=50,
broker_pool_limit=50,
broker_heartbeat=600,
broker_heartbeat_checkrate=5.0,
worker_disable_rate_limits=True,
worker_lost_wait=60.0,
broker_connection_timeout=10,
task_serializer='json',
accept_content=['json'],
timezone='UTC',
worker_prefetch_multiplier=0,
forked_by_multiprocessing=1,
```
Startup command:

```shell
celery multi start default_worker \
    -A vna.common.task.internal.engine.app \
    --hostname=default_worker \
    --suffix=default \
    --loglevel=DEBUG \
    --logfile=/var/log/galaxengine/log/vna-api/celery-worker.log \
    --pidfile=/opt/vna/default-worker.pid \
    -P prefork --autoscale=20,5 -Ofair -Q default
```
Problem: We send a task to the Redis queue. The worker then pulls the task out of Redis, but more than 3 seconds pass before a child process actually starts executing it. In other words, there is a delay of more than 3 seconds between fetching a task and executing it.

Analysis: After multiple experiments, we found the root cause: the child process's RSS exceeds 60 MB while executing tasks, hitting the `worker_max_memory_per_child=60000` limit, so the child process is restarted.
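To double-check that analysis, the peak RSS can be logged from inside the task itself. Note that `worker_max_memory_per_child` is measured in kibibytes, so `60000` is roughly 58.6 MiB. A minimal sketch, assuming a Linux worker (where `ru_maxrss` is also reported in KiB); the task name `log_rss` is hypothetical:

```python
# Sketch: log the child's peak RSS so the recycling threshold can be verified.
import resource

from celery import shared_task
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@shared_task
def log_rss():
    # On Linux, ru_maxrss is the peak resident set size in KiB,
    # the same unit as worker_max_memory_per_child.
    peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    logger.info("peak RSS: %d KiB (limit: 60000 KiB)", peak_kib)
```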
Requirement: What we would like to know:

1. We configured a maximum of 20 child processes and a minimum of 5 (`--autoscale=20,5`). Why are tasks always executed by only one or two of the processes while the others stay idle? Is there a way to tune the scheduling policy so that tasks are distributed evenly across all child processes? (See the sketch after this list.)
2. Why does forking a new child process take more than 3 s? What operations does the child process perform during the fork?
3. If a child process is destroyed and rebuilt, can its pending task be executed by another child process instead of waiting for the fork to complete?
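On question 1: with `worker_prefetch_multiplier=0`, Celery prefetches an unlimited number of messages, and the prefork pool may pipeline several tasks onto the same child. A commonly suggested adjustment, offered here as something to test rather than a guaranteed fix, is to cap prefetch at one message per process slot and keep `-Ofair`, which makes the master hand work only to children that are currently idle:

```python
# Sketch: settings that tend to spread short tasks across all pool processes.
app.conf.update(
    worker_prefetch_multiplier=1,  # fetch at most one message per process slot
    task_acks_late=True,           # optional: acknowledge after execution, not on receipt
)
# Combine with the -Ofair flag already present in the startup command above.
```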
Most importantly: how can we avoid the stall while the Celery worker (master) forks a new child process?
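For completeness, one way to make that stall rarer is to relax the recycling thresholds so children are replaced less often; the values below are illustrative assumptions, not recommendations:

```python
# Sketch: relax recycling thresholds so child processes are replaced less often.
app.conf.update(
    worker_max_tasks_per_child=500,      # was 50
    worker_max_memory_per_child=200000,  # was 60000 KiB (~195 MiB vs ~59 MiB)
)
```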
Upvotes: 1
Views: 6