Reputation: 46513
Let's say we have an application based on Bottle like this:
from bottle import route, run, request, template, response
import time

def long_processing_task(i):
    time.sleep(0.5)  # here some more
    return int(i) + 2  # complicated processing in reality

@route('/')
def index():
    i = request.params.get('id', '', type=str)
    a = long_processing_task(i)
    response.set_header("Cache-Control", "public, max-age=3600")  # does not seem to work
    return template('Hello {{a}}', a=a)  # in reality: template('index.html', a=a, b=b, ...) based on an external template file

run(port=80)
Obviously, going to http://localhost/?id=1, http://localhost/?id=2, http://localhost/?id=3, etc. takes at least 500 ms for the first load of each page.
How can I make subsequent loads of these pages faster?
More precisely, is there a way to have both:
client-side caching: if user A has visited http://localhost/?id=1 once, then if user A visits this page a second time, it will be faster
server-side caching: if user A has visited http://localhost/?id=1 once, then if user B visits this page later (for the first time for user B!), it will be faster too.
In other words: if 500 ms is spent generating http://localhost/?id=1 for one user, can the result be cached for all future users requesting the same page? (Is there a name for that?)
Notes:
In my code, response.set_header("Cache-Control", "public, max-age=3600") does not seem to work.
This tutorial mentions template caching:
Templates are cached in memory after compilation. Modifications made to the template files will have no effect until you clear the template cache. Call bottle.TEMPLATES.clear() to do so. Caching is disabled in debug mode.
but I think this is unrelated to caching the final page that is ready to send to the client.
Upvotes: 0
Views: 998
Reputation: 18148
Server Side
You want to avoid calling your long-running task repeatedly. A naive solution that works at small scale is to memoize long_processing_task:
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def long_processing_task(i):
    time.sleep(0.5)  # here some more
    return int(i) + 2  # complicated processing in reality
More complex solutions (that scale better) involve setting up a reverse proxy (cache) in front of your web server.
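To get the "render once, serve to everyone" behaviour the question asks about (often called full-page or output caching), you can also memoize the rendered page body itself rather than just the computation, with a time-to-live so stale pages eventually expire. Here is a minimal framework-agnostic sketch; the cached_page decorator and render_page function are hypothetical names, and in the Bottle app you would apply the decorator to the rendering logic of the handler:

```python
import time
from functools import wraps

def cached_page(ttl_seconds=3600):
    """Cache the fully rendered page body per argument tuple (hypothetical helper)."""
    cache = {}  # key: args tuple -> (expiry_timestamp, rendered_body)

    def decorator(func):
        @wraps(func)
        def wrapper(*args):
            now = time.time()
            entry = cache.get(args)
            if entry is not None and entry[0] > now:
                return entry[1]  # cache hit: skip the slow render entirely
            body = func(*args)   # cache miss (or expired): do the expensive work
            cache[args] = (now + ttl_seconds, body)
            return body
        return wrapper
    return decorator

@cached_page(ttl_seconds=3600)
def render_page(page_id):
    time.sleep(0.5)  # stands in for the slow processing + templating
    return 'Hello %d' % (int(page_id) + 2)
```

The first call to render_page('1') takes ~500 ms; every later call with the same id returns the cached body almost instantly, for any user, until the TTL expires. Note that an in-process dict grows without bound across distinct ids and is lost on restart, which is one reason a reverse proxy cache scales better.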
Client Side
You'll want to use response headers to control how clients cache your responses (see the Cache-Control and Expires headers). This is a broad topic, with many nuanced alternatives that are out of scope for an SO answer; for example, there are tradeoffs involved in asking clients to cache (they won't get updated results until their local cache expires).
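As a sketch of what those two headers look like: Cache-Control with max-age is the modern HTTP/1.1 mechanism, while Expires is its HTTP/1.0 ancestor that names an absolute date. The values below could be set in Bottle via response.set_header; the variable names are illustrative:

```python
import time
from email.utils import formatdate

max_age = 3600  # let clients reuse the response for one hour

# Modern header: relative lifetime.
cache_control = 'public, max-age=%d' % max_age

# Legacy header: absolute HTTP-date in GMT, e.g. 'Thu, 01 Jan 2026 00:00:00 GMT'.
expires = formatdate(time.time() + max_age, usegmt=True)
```

Note that for these headers to have any effect, the page must actually be served with a cacheable status and the client must be making a plain GET request.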
An alternative to caching is to use conditional requests: use an ETag or Last-Modified header to return an HTTP 304 when the client has already received the latest version of the response.
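The ETag mechanics can be sketched framework-free as a pure function; conditional_response is a hypothetical helper, and in Bottle you would read If-None-Match from the request headers and set the status and ETag on the response instead:

```python
import hashlib

def conditional_response(body, if_none_match=None):
    """Apply a simple strong-ETag check; returns (status, headers, body).

    if_none_match is the value of the client's If-None-Match header, if any.
    """
    # Derive the ETag from the rendered body, so identical content revalidates.
    etag = '"%s"' % hashlib.sha1(body.encode('utf-8')).hexdigest()
    if if_none_match == etag:
        return 304, {'ETag': etag}, ''  # client already has this version: empty body
    return 200, {'ETag': etag}, body

# First request: full 200 response carrying the ETag.
status, headers, _ = conditional_response('Hello 3')
# Revalidation: the client echoes the ETag back; the server answers 304 with no body.
status2, _, body2 = conditional_response('Hello 3', if_none_match=headers['ETag'])
```

Note that a 304 still requires a round trip to the server (and here would still run the slow render to produce the body), so conditional requests save bandwidth rather than server CPU; combine them with server-side caching for both.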
Here's a helpful overview of the various header-based caching strategies.
Upvotes: 1