Reputation: 5705
So this is a more or less theoretical question.
Let's say we have a multi-node Swarm setup consisting of 3 nodes. We have deployed a Python service which uses Celery with Redis as the message broker, so there is also a 3-replica Redis service as part of the application.
Now, since this Redis service is acting as the message broker, and we only use the service name for DNS resolution inside my Python app, how does Docker Swarm or my application know which Redis node holds the task that I have placed in the queue?
I mean, the routing mesh is only going to direct traffic for a particular service to any one of the nodes running that service. Now my Python app has launched a task asynchronously and placed it in the Redis queue. Once that is done, I want my app to query Redis to get the result. But how does it know which node has the result?
Is this somewhat like sticky sessions? Please let me know if anything is not clear.
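For illustration, here is a minimal sketch of the setup I am describing, assuming the Swarm service is simply named `redis` and that Celery uses the same service for both the broker and the result backend (all names are placeholders):

```python
# The Celery app only knows the Swarm service name "redis"; Swarm's internal
# DNS / VIP decides which replica actually receives each connection.
from celery import Celery

app = Celery(
    'tasks',
    broker='redis://redis:6379/0',    # "redis" is the Swarm service name
    backend='redis://redis:6379/1',   # results stored via the same service name
)

@app.task
def add(x, y):
    return x + y
```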
Upvotes: 0
Views: 538
Reputation: 15926
We typically handle this use case by putting Redis Sentinel in front of the Redis cluster to enable automatic failover, and pointing Celery at the Sentinel endpoints as the broker (it's fairly simple in Celery 4.2.0). If you are comfortable handling manual Redis failover in your cluster, then you can just point Celery at the Redis service.
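As a rough sketch of pointing Celery at Sentinel, assuming three Sentinel instances reachable as `sentinel-1`, `sentinel-2`, `sentinel-3` and a monitored master named `mymaster` (all names here are placeholders for your own stack):

```python
from celery import Celery

# Celery 4.x accepts multiple sentinel:// URLs separated by semicolons;
# Sentinel then tells the client which Redis node is the current master,
# so failover is handled transparently for the broker connection.
app = Celery(
    'tasks',
    broker=(
        'sentinel://sentinel-1:26379;'
        'sentinel://sentinel-2:26379;'
        'sentinel://sentinel-3:26379'
    ),
)

# master_name must match the name configured in sentinel.conf
# (e.g. "sentinel monitor mymaster ...").
app.conf.broker_transport_options = {'master_name': 'mymaster'}
```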
Upvotes: 1