Madan

Reputation: 77

Thread Safe Storage in Django

I'm customizing Django's AdminEmailHandler, adding some burst protection to it so that if several errors are raised within a minute, only one error email is sent out.

    def burst_protection(self, lag=60):
        """Return True if an email was already sent within the last `lag` seconds.

        :param lag: quiet period, in seconds, during which further emails are suppressed
        :return: True to suppress this email, False to let it through
        """
        current_time = int(time.time())
        global timestamp  # module-level variable holding the time of the last email

        if current_time - timestamp > lag:
            # The quiet period has passed: record this send and let the email out.
            timestamp = current_time
            enable_burst_protection = False
        else:
            enable_burst_protection = True

        return enable_burst_protection

Originally, I implemented timestamp as a class variable, but this doesn't protect against message bursts in our production environment; I'm assuming multiple threads or processes on the server are reading and writing timestamp at the same time. Is there a thread- and process-safe way to store the timestamp value in Python/Django?
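
The class-variable version looked roughly like this (a sketch, not the exact code; the handler name is just a stand-in):

    class ThrottledAdminEmailHandler(AdminEmailHandler):
        timestamp = 0  # class variable: shared across threads, but each process gets its own copy

        def burst_protection(self, lag=60):
            current_time = int(time.time())
            # Unsynchronized check-then-set: two threads can pass the check together,
            # and separate processes never see each other's timestamp at all.
            if current_time - ThrottledAdminEmailHandler.timestamp > lag:
                ThrottledAdminEmailHandler.timestamp = current_time
                return False
            return True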

I've heard that it's possible by storing the timestamp value in a database, but I would prefer to avoid accessing the database for this.

Upvotes: 2

Views: 1168

Answers (2)

thebjorn

Reputation: 27351

Redis is pretty good for rate-limiting implementations, e.g.:

We need a unique key per user; here we use either the session key or the user's IP address plus a hash of the user-agent string. If you want to rate limit globally, returning a constant (e.g. the servicename parameter) will do that:

import hashlib

def _ratelimit_key(servicename, request):
    """Return a key that identifies one visitor uniquely and is durable."""
    sesskey = request.session.session_key
    if sesskey is not None:
        unique = sesskey
    else:
        # No session yet: fall back to the visitor's IP address plus a hash
        # of the user-agent string.
        ip = request.META.get('REMOTE_ADDR', 'no-remote-addr')
        ua = request.META.get('HTTP_USER_AGENT', 'no-user-agent')
        digest = hashlib.md5(ua.encode('utf-8')).hexdigest()
        unique = '%s-%s' % (ip, digest)
    return '%s-%s' % (servicename, unique)

Then the rate limiter can be implemented as a decorator:

import functools
import time

import redis
from django import http

def strict_ratelimit(name, seconds=10, message="Too many requests."):
    """Basic rate limiter: only lets the user through `seconds` after the last attempt.

    Args:
        name: the service to limit (in case several views share a service)
        seconds: the length of the quiet period
        message: the message to display to the user when rate limited

    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrap(request, *args, **kwargs):
            r = redis.Redis()
            key = _ratelimit_key(name, request)
            if r.exists(key):
                r.expire(key, seconds)  # refresh the timeout: the quiet period restarts
                return http.HttpResponse(message, status=429)  # 429 Too Many Requests
            r.setex(key, seconds, "nothing")  # the value is irrelevant; the key's TTL does the limiting
            return fn(request, *args, **kwargs)
        return wrap
    return decorator

Usage:

@strict_ratelimit('search', seconds=5)
def my_search_view(request):
    ...

A strict rate limiter is usually not what you want, though; normally you'd like to allow small bursts, as long as there aren't too many requests inside a time interval. A "leaky bucket" algorithm (google it) does exactly that (same usage as above):

def leaky_bucket(name, interval=30, size=3, message="Too many requests."):
    """Rate limiter that allows bursts.

    Args:
        name:     the service to limit (several views can share a service)
        interval: the time period (in seconds)
        size:     maximum number of activities in a time period
        message:  message to display to the user when rate limited

    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrap(request, *args, **kwargs):
            # decode_responses=True makes hgetall() return strings rather than
            # bytes, so the float() conversions below work on Python 3.
            r = redis.Redis(decode_responses=True)
            key = _ratelimit_key(name, request)
            if r.exists(key):
                val = r.hgetall(key)
                value = float(val['value'])
                now = time.time()

                # Leak the bucket: drain `size` units per `interval`,
                # but never drain below empty.
                elapsed = now - float(val['timestamp'])
                value = max(0.0, value - elapsed / interval * size)

                if value + 1 > size:
                    # Bucket is full: store the drained level and reject.
                    r.hset(key, mapping=dict(timestamp=now, value=value))
                    r.expire(key, interval)
                    return http.HttpResponse(message, status=429)
                else:
                    value += 1.0
                    r.hset(key, mapping=dict(timestamp=now, value=value))
                    r.expire(key, interval)
                    return fn(request, *args, **kwargs)
            else:
                # First activity in this period: start a fresh bucket.
                r.hset(key, mapping=dict(timestamp=time.time(), value=1.0))
                r.expire(key, interval)
                return fn(request, *args, **kwargs)

        return wrap
    return decorator
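
Usage is the same as above, e.g.:

@leaky_bucket('search', interval=30, size=3)
def my_search_view(request):
    ...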

Upvotes: 1

CodeSamurai777

Reputation: 3355

There is a method, but note this caveat from the docs:

To provide thread-safety, a different instance of the cache backend will be returned for each thread.

You can use caching, and if you do not want to hit the database you can use in-memory cache storage.

https://docs.djangoproject.com/en/2.2/topics/cache/

Look at the Local-memory caching section

Example (from docs):

settings.py

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}

Then you can use low-level caching; see "The low-level cache API" section on the same page.

views.py - or wherever your burst is located

from time import time
from django.core.cache import caches

cache1 = caches['default']  # look the cache up by its alias in CACHES, not by its LOCATION
cache1.set('my_key', time())
...
cache1.get('my_key')
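
Tying this back to the question, burst_protection could then be written against the cache. A minimal sketch (the key name 'admin_email_sent' is made up; and per the caveat above, a local-memory cache is private to each process, so this only suppresses bursts within one process):

from django.core.cache import caches

cache = caches['default']

def burst_protection(self, lag=60):
    # cache.add() stores the key only if it is absent, and LocMemCache guards
    # its storage with a lock, so the check-and-set is safe across threads.
    marker_written = cache.add('admin_email_sent', True, timeout=lag)
    # If the marker was already present, an email went out within `lag` seconds.
    return not marker_written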

Upvotes: 1
