Installation

Create or use a compatible cache

django_ratelimit requires a cache backend that

  1. Is shared across all worker threads, processes, and application servers. Sharded cache backends can help this scale.
  2. Implements atomic increment.

Redis and Memcached backends have these features and are officially supported. Backends like local memory and filesystem are not shared across processes or servers. Notably, the database backend does not support atomic increments.

If you do not have a compatible cache backend, you’ll need to set one up (doing so is beyond the scope of this document) and then add it to the CACHES dictionary in settings.
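
For example, here is a minimal sketch of a compatible entry, assuming Django 4.0 or later (which ships a built-in Redis cache backend) and a Redis server on the default local port; the LOCATION value is a placeholder for your own setup:

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379/1',
    },
}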

Warning

Without atomic increment operations, django_ratelimit will appear to work, but there is a race condition between reading and writing usage count data that can result in undercounting usage and permitting more traffic than intended.
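
To see why this matters, here is a sketch using Django’s cache API that contrasts the racy read-modify-write pattern with an atomic one; the key name and timeout are made up for the example, and this is not django_ratelimit’s actual implementation:

from django.core.cache import cache

# Racy: two workers can both read the same value and both write value + 1,
# so one request goes uncounted.
count = cache.get('rl:example-key', 0)
cache.set('rl:example-key', count + 1, timeout=60)

# Atomic: add() only creates the key if it is missing, and incr() is a single
# atomic operation on backends such as Redis and Memcached.
cache.add('rl:example-key', 0, timeout=60)
cache.incr('rl:example-key')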

Configuration

django_ratelimit has reasonable defaults; if your default cache is compatible and your application is not behind a reverse proxy, you can skip this section.

For a complete list of configuration options, see Settings.

Cache Settings

If you have added an additional CACHES entry for ratelimiting, you’ll need to tell django_ratelimit to use it via the RATELIMIT_USE_CACHE setting:

# your_apps_settings.py
CACHES = {
    'default': {},
    'cache-for-ratelimiting': {},
}

RATELIMIT_USE_CACHE = 'cache-for-ratelimiting'

Reverse Proxies and Client IP Address

django_ratelimit reads the client IP address from request.META['REMOTE_ADDR']. If your application is running behind a reverse proxy such as nginx or HAProxy, you will need to take steps to ensure you have access to the correct client IP address, rather than the address of the proxy.

It is risky for a library to make assumptions about how your network is set up, so django_ratelimit does not provide any built-in tools to address this. The Security chapter does, however, offer suggestions on how to approach it.
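
As one illustration of the kind of approach that chapter discusses, the sketch below copies the client address from X-Forwarded-For into REMOTE_ADDR. It assumes exactly one trusted reverse proxy that appends the real client IP as the last value of that header; the middleware name is made up, and you should adapt or discard it to fit your own deployment rather than treating it as a drop-in solution:

# yourapp/middleware.py (illustrative sketch only; adapt to your network)
class ClientIPFromForwardedFor:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        forwarded_for = request.META.get('HTTP_X_FORWARDED_FOR', '')
        if forwarded_for:
            # The trusted proxy appends the real client address last.
            request.META['REMOTE_ADDR'] = forwarded_for.rsplit(',', 1)[-1].strip()
        return self.get_response(request)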

Enforcing Ratelimits

The most common way to enforce ratelimits is via the ratelimit decorator:

from django.http import HttpResponse
from django.utils.decorators import method_decorator
from django.views import View

from django_ratelimit.decorators import ratelimit

@ratelimit(key='user_or_ip', rate='10/m')
def myview(request):
    # limited to 10 req/minute for a given user or client IP
    return HttpResponse('ok')

# or on class methods
class MyView(View):
    @method_decorator(ratelimit(key='user_or_ip', rate='1/s'))
    def get(self, request):
        # limited to 1 req/second
        return HttpResponse('ok')
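
If you want to observe a limit without rejecting requests outright, the decorator also accepts block=False, in which case over-limit requests are annotated with request.limited and the response is left up to the view. A sketch, with the view name and the 429 response chosen arbitrarily for the example:

from django.http import HttpResponse

from django_ratelimit.decorators import ratelimit

@ratelimit(key='user_or_ip', rate='10/m', block=False)
def my_soft_view(request):
    # The decorator sets request.limited instead of raising when the client
    # exceeds 10 req/minute; the view decides how to respond.
    if getattr(request, 'limited', False):
        return HttpResponse('Too many requests, slow down.', status=429)
    return HttpResponse('ok')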