Reputation: 1209
I am running Celery in a Kubernetes pod, and it can't find the Redis server:
ERROR/MainProcess] consumer: Cannot connect to redis://:**@redis-master:6379/1: Error -3 connecting to redis-master:6379. Lookup timed out.. Trying again in 4.00 seconds... (1/100)
If I connect to the very same pod via "kubectl exec -it" and run the command manually, it succeeds:
redis-cli -u redis://:@redis-master:6379/1 keys '*'
(empty list or set)
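For completeness, the same check can be done from Python inside the pod (a minimal sketch using the redis-py client and the same empty-password URL as the redis-cli test above):

import redis  # assumes the redis-py client is installed in the image

# Same connection string the worker is given; if this works while Celery
# times out, the problem is in how the worker resolves the hostname,
# not in Redis itself.
r = redis.Redis.from_url("redis://:@redis-master:6379/1")
print(r.keys("*"))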
How can I troubleshoot this problem?
UPDATE 1: The problem is obviously in DNS.
If I set the host to the domain name:
export REDIS_HOST=redis-master.dev.svc.cluster.local
celery worker --app src
TIMEOUT
If I set the host to the IP address:
export REDIS_HOST=10.0.13.13
celery worker --app src
OK
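For reference, the worker builds its broker URL from REDIS_HOST roughly like this (a simplified sketch, not the full src package):

import os
from celery import Celery

# Simplified: the real src package contains more settings than shown here.
redis_host = os.environ.get("REDIS_HOST", "redis-master")
broker_url = "redis://:@{}:6379/1".format(redis_host)

app = Celery("src", broker=broker_url, backend=broker_url)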
Meanwhile, dig resolves the name just fine:
# dig redis-master.dev.svc.cluster.local
; <<>> DiG 9.11.5-P4-5.1+deb10u1-Debian <<>> redis-master.dev.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49071
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;redis-master.dev.svc.cluster.local. IN A
;; ANSWER SECTION:
redis-master.dev.svc.cluster.local. 30 IN A 10.0.13.13
;; SERVER: 10.0.0.10#53(10.0.0.10)
Thus, the problem is narrowed down to: why doesn't Celery use DNS?
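One way to see which resolver path is failing is to test name resolution from Python directly inside the pod (a quick sketch; dnspython turns out to be the relevant piece, see UPDATE 2 below):

import socket
import dns.resolver  # dnspython

name = "redis-master.dev.svc.cluster.local"

# Standard-library resolution (what a plain socket connection would use).
print(socket.getaddrinfo(name, 6379))

# Resolution through dnspython: resolve() on 2.0.0, query() on 1.16.0.
answer = dns.resolver.resolve(name, "A")
print([rdata.address for rdata in answer])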
UPDATE 2: The problem is in the Python library dnspython. Version 2.0.0 has a bug in name resolution; version 1.16.0 works like a charm.
SOLUTION
pip install dnspython==1.16.0
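To confirm which dnspython version the worker actually imports after the downgrade:

import dns.version
print(dns.version.version)  # should print 1.16.0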
Upvotes: 0
Views: 1046
Reputation: 1209
Answering myself:
This is a bug in dnspython 2.0.0. The solution is to downgrade:
pip install dnspython==1.16.0
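To keep the fix across image rebuilds, pin the version in requirements.txt (or whatever dependency file the image is built from):

dnspython==1.16.0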
Upvotes: 4