Reputation: 4199
I have created an app using GAE. I am expecting 100k requests daily. At present, for each request the app needs to look up 4 tables and 8 different columns before performing the needed task.
These 4 tables are my master tables, with 5k, 500, 200 and 30 records respectively. Together they are under 1 MB (the limit).
Now I want to put my master records in memcache for faster access and to reduce RPC calls. Whenever a user updates a master table, I'll replace the memcache object.
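Roughly, here is what I have in mind (just a sketch; the kind names Department, Status, Category and Config stand in for my actual master kinds):

from google.appengine.api import memcache

MASTER_CACHE_KEY = 'master_tables'  # placeholder key name

def get_masters():
    # Return all four master tables as one cached dict; rebuild it on a miss.
    masters = memcache.get(MASTER_CACHE_KEY)
    if masters is None:
        masters = {
            # Department, Status, Category, Config are placeholders for my master kinds.
            'department': Department.all().fetch(5000),
            'status': Status.all().fetch(500),
            'category': Category.all().fetch(200),
            'config': Config.all().fetch(30),
        }
        memcache.set(MASTER_CACHE_KEY, masters)
    return masters

def masters_updated():
    # Drop the cached copy whenever a user edits any master record.
    memcache.delete(MASTER_CACHE_KEY)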
I need the community's suggestions on this.
Is it OK to change the current design?
How can I put the data from all 4 master tables in memcache?
Here is how the application currently works: it looks up the department (master table) and checks whether p1 exists; if it does, it checks its enabled status.
Upvotes: 4
Views: 1470
Reputation: 101139
You shouldn't be thinking in terms of inserting tables into memcache. Instead, use an 'optimistic cache' strategy: any time you need to perform an operation that you want to cache, first attempt to look it up in memcache, and if that fails, fetch it from the datastore, then store in memcache. Here's an example:
from google.appengine.api import memcache
from google.appengine.ext import db

def cached_get(key):
    # Check memcache first; on a miss, fetch from the datastore and cache it.
    entity = memcache.get(str(key))
    if not entity:
        entity = db.get(key)
        memcache.set(str(key), entity)
    return entity
Note, though, that caching individual entities offers fairly low returns, since the datastore is already quite fast at single-entity fetches. Caching query results or rendered pages will give a much bigger improvement in speed.
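For instance, a minimal sketch of caching a query result; the 'active_departments' key, the Department kind and its enabled property are illustrative placeholders, not anything from your schema:

from google.appengine.api import memcache
from google.appengine.ext import db

def cached_active_departments():
    # Cache a whole query result for ten minutes rather than one entity at a time.
    results = memcache.get('active_departments')
    if results is None:
        results = db.GqlQuery(
            "SELECT * FROM Department WHERE enabled = TRUE").fetch(1000)
        memcache.set('active_departments', results, time=600)
    return results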
Upvotes: 6