Reputation: 7333
I'm working on a fairly simple CGI with Python, which I'm about to move into Django. The overall setup is pretty standard server-side (i.e. computation is done on the server).
I don't think there are going to be hundreds or thousands of people using this at once; however, the computation involved takes a fair amount of RAM and processor power (each instance forks its most CPU-intensive task using Python's multiprocessing Pool).
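For context, the forking pattern looks roughly like this (a minimal sketch; heavy_computation and its inputs are placeholders for my actual code):
import multiprocessing

def heavy_computation(chunk):
    # stand-in for the RAM- and CPU-hungry work
    return sum(x * x for x in chunk)

def handle_request(chunks):
    # each CGI invocation forks worker processes for the heavy part
    pool = multiprocessing.Pool()
    try:
        return pool.map(heavy_computation, chunks)
    finally:
        pool.close()
        pool.join()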
I wondered if you know whether it would be worth the trouble to use a queueing system. I came across a Python module called beanstalkc, but its page says it is an "in-memory" queueing system.
What does "in-memory" mean in this context? I worry about memory, not just CPU time, so I want to ensure that only one job runs at a time (or is held in RAM, whether or not it is receiving CPU time).
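To make that concrete, the behavior I'm after is a single dedicated worker that pulls jobs one at a time, something like this sketch against beanstalkc's Connection/reserve API (process_job is a placeholder for my actual computation):
import beanstalkc

def process_job(body):
    # placeholder for the expensive computation
    print(body)

# beanstalkd holds the queue; a single worker loop like this means
# at most one job is ever being processed (and resident in RAM)
queue = beanstalkc.Connection(host='localhost', port=11300)
while True:
    job = queue.reserve()  # blocks until a job is available
    process_job(job.body)
    job.delete()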
What do you think is the appropriate design methodology for a light-traffic CGI for a problem of this sort? Advice is much appreciated.
Upvotes: 1
Views: 409
Reputation: 40223
Definitely use Celery. You can run an AMQP server, or I think you can use the database as the queue for the messages. It allows you to run tasks in the background, and it can use multiple worker machines to do the processing if you want. It can also run database-backed cron jobs if you use django-celery.
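For example, with django-celery you can point the broker at your Django database and cap the worker at one task at a time, which would also cover your memory concern. A sketch of the relevant settings.py entries (these names are from the django-celery 2.x/3.x era, so check them against your versions):
# settings.py
import djcelery
djcelery.setup_loader()

INSTALLED_APPS += (
    'djcelery',
    'kombu.transport.django',  # database-backed message transport
)

BROKER_URL = 'django://'       # use the Django DB instead of an AMQP server
CELERYD_CONCURRENCY = 1        # one task at a time, so only one job in RAM
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'  # DB-based cron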
It's as simple as this to run a task in the background:
from celery.task import task  # in Celery 3+ this is: from celery import task

@task
def add(x, y):
    return x + y
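Calling it from your view is then a one-liner; delay() puts the task on the queue and returns immediately:
result = add.delay(4, 4)       # enqueue and get an AsyncResult right away
print(result.get(timeout=10))  # only if you actually need to wait for the value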
In one of my projects it's distributing the work over 4 machines, and it works great.
Upvotes: 1