Reputation: 416
I am trying to use the Django cache to implement a lock mechanism. The official Celery site claims that the Django cache works fine for this. However, in my experience it does not. What I observe is that if multiple threads/processes try to acquire the lock at almost the same time (within roughly 0.003 seconds of each other), all of them acquire it successfully. Threads that try to acquire the lock more than about 0.003 seconds later fail as expected.
Am I the only one who has experienced this? Please correct me if I am doing something wrong.
def acquire(self, block=False, slp_int=0.001):
    while True:
        # cache.add() only sets the key if it does not already exist,
        # so a successful add is treated as "lock acquired".
        added = cache.add(self.ln, 'true', self.timeout)
        if added:
            cache.add(self.ln + '_pid', self.pid, self.timeout)
            return True
        if block:
            # Lock is held by someone else: wait briefly and retry.
            sleep(slp_int)
            continue
        else:
            return False
# Django cache backend set to a file-based cache on /dev/shm
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/dev/shm/django_cache',
    }
}
Upvotes: 4
Views: 1702
Reputation: 151401
The problem is that Django makes no guarantees as to the atomicity of .add(). Whether or not .add() is in fact atomic depends on the backend you are using. With a FileBasedCache, .add() is not atomic:
def add(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
    if self.has_key(key, version):
        return False
    self.set(key, value, timeout, version)
    return True
Worker A executing .add() could be preempted after self.has_key(...) but before self.set(...). Worker B executing .add() in one shot would successfully set the key and return True. When worker A resumes, it would also set the key and return True.
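You can see this race with a small standalone script along these lines (a sketch only, not your code: it assumes Django is installed, points a FileBasedCache at a throwaway directory, and lines several processes up so they all call cache.add() at nearly the same instant):
# race_demo.py -- hypothetical reproduction sketch, not production code.
import multiprocessing

from django.conf import settings

# Same kind of backend as in the question: a file-based cache.
settings.configure(CACHES={
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/tmp/django_cache_race_demo',
    }
})

from django.core.cache import cache


def worker(barrier, results, idx):
    barrier.wait()  # release all workers at (almost) the same moment
    # Non-atomic check-then-set: several workers can pass has_key()
    # before any of them has written the cache file.
    results[idx] = cache.add('my-lock', 'true', 30)


if __name__ == '__main__':
    n = 8
    barrier = multiprocessing.Barrier(n)
    results = multiprocessing.Array('b', n)
    procs = [multiprocessing.Process(target=worker, args=(barrier, results, i))
             for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # With an atomic add() this would always be 1; with FileBasedCache
    # it is frequently greater than 1.
    print('workers that "acquired" the lock:', sum(results))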
This issue report indicates that the example code you looked at assumes the backend is Memcached. If you use Memcached, or any other backend that supports an atomic .add(), it should work.
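For example (a sketch only; it assumes a memcached server on 127.0.0.1:11211 and Django 3.2+ with pymemcache installed; on older Django versions use MemcachedCache or PyLibMCCache instead), switching the backend makes cache.add() delegate to memcached's atomic add command, and a simplified version of the lock pattern from the Celery docs then behaves as expected:
# settings.py (sketch): memcached's "add" command is atomic server-side,
# so only one caller can ever create the lock key.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

# elsewhere (sketch of the lock pattern, simplified):
from contextlib import contextmanager
from django.core.cache import cache

@contextmanager
def cache_lock(lock_id, timeout=300):
    acquired = cache.add(lock_id, 'locked', timeout)  # atomic on memcached
    try:
        yield acquired
    finally:
        if acquired:
            # Caveat: if the work outlives `timeout`, the key may already have
            # expired and been re-acquired by another worker; the Celery docs
            # example guards against that by tracking the elapsed time.
            cache.delete(lock_id)

with cache_lock('my-task-lock') as got_it:
    if got_it:
        pass  # do the exclusive work here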
Upvotes: 5