Reputation: 95
I'm using Python 2.6 and CherryPy 3.1.
I have a problem with request timeouts. I simply need requests to finish within a limit (30 seconds); after that limit the process should be killed and an answer returned.
The server is started through tree.mount(); cherrypy.start(); cherrypy.block().
First of all, when I try to kill the app (by Ctrl+C, Debian 6.0), it gets stuck on:
Waiting for child threads to terminate...
How do I kill the processes on exit, and how do I handle the connection timeout so I can kill a process that is not responding?
I can't post any code here because it is strictly proprietary; anyway, I hope someone has already solved this problem.
Greetings, Martin
Upvotes: 3
Views: 4223
Reputation: 25234
As of CherryPy 12, _TimeoutMonitor has been removed. See the pull request for its source if you are still inclined to try the hack below.
Caution. It is very likely that you will mess things up if you carelessly kill threads or processes as part of your normal execution flow. In the case of Python code, except and finally blocks won't execute, and you'll end up with memory and synchronization issues (e.g. deadlocks), unclosed file handles and sockets, and so on and so forth. Killing is a solution to your problem about as much as a guillotine is a cure for dandruff.
In general you cannot do it: there is just no such API. You can find hacks for some specific cases, but in general you can't. The only stable way to terminate a thread is to cooperate with it via flags, events, etc.
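For illustration, here is a minimal sketch of such cooperative termination with a threading.Event flag; the worker function and names are made up and not part of CherryPy:
import threading
import time

def worker(stop_event):
    # The thread checks the flag between small units of work and
    # exits cleanly instead of being killed from the outside.
    while not stop_event.is_set():
        time.sleep(0.1)  # stands in for one small chunk of work

stop = threading.Event()
thread = threading.Thread(target=worker, args=(stop,))
thread.start()

time.sleep(1)
stop.set()              # ask the thread to stop
thread.join(timeout=5)  # and wait for it to comply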
When you press Ctrl+C in the terminal that owns the CherryPy Python process, the interpreter receives a SIGINT signal and raises a KeyboardInterrupt exception. CherryPy then commands its worker threads to stop and tells you Waiting for child threads to terminate.... If a worker thread is blocked in user code, CherryPy will wait until it is released.
To force-stop it you can use the common kill -9 PID, where PID is the process id of your CherryPy process. You can find it with any process monitor, or incorporate cherrypy.process.plugins.PIDFile to write a pidfile for the process.
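For example, a minimal sketch of hooking up the plugin (the pidfile path is arbitrary):
import cherrypy
from cherrypy.process.plugins import PIDFile

# Write the server's PID to a file on engine start and remove it on exit,
# so a script can do: kill -9 $(cat /tmp/myapp.pid)
PIDFile(cherrypy.engine, '/tmp/myapp.pid').subscribe()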
In general, CherryPy is a threaded server. If responses that take a dozen seconds are common for your tasks, you can easily run out of worker threads. In such a case a background task queue (Celery, Rq) can be a good idea, or at least some use of cherrypy.process.plugins.BackgroundTask. But it obviously makes you redesign your system: temporary result storage, response polling or push. It adds complexity, so the decision should be made by weighing all the pros and cons.
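As a rough illustration of the BackgroundTask option (the jobs queue and process_jobs function are made up; result storage and polling are left out):
import Queue  # ``queue`` on Python 3

from cherrypy.process.plugins import BackgroundTask

jobs = Queue.Queue()

def process_jobs():
    # Runs in its own thread every 5 seconds and drains the queue,
    # so request handlers only enqueue work and return immediately.
    while True:
        try:
            job = jobs.get_nowait()
        except Queue.Empty:
            return
        # ... do the slow work here and store the result somewhere ...

task = BackgroundTask(5, process_jobs)
task.start()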
If you can limit the execution on the end that does the processing or calculation, you're better off doing it there, be it a database, a web-service API call, or whatnot, and then just handling a timeout exception.
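For example, if the slow part is an outgoing HTTP call, the 30-second limit can live on that end (the URL here is a placeholder):
import socket
import urllib2  # ``urllib.request`` on Python 3

def fetch_with_limit(url):
    # Let the call itself give up after 30 seconds instead of
    # killing the thread that waits for it.
    try:
        return urllib2.urlopen(url, timeout=30).read()
    except (urllib2.URLError, socket.timeout):
        return None  # report the failure to the client in the normal way

result = fetch_with_limit('http://example.com/slow-api')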
There is a response timeout feature in CherryPy out of the box. It is controlled by the response.timeout configuration entry and by cherrypy._TimeoutMonitor, which runs in a separate thread and checks whether a response has timed out. In fact, though, the monitor only sets an attribute, response.timed_out, which is later inspected in cherrypy._cprequest.Request.run, and if it is true, cherrypy.TimeoutError is raised. So the timeout exception is raised post factum: if your page handler blocks for 30 seconds, you will only get the exception after those 30 seconds.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import time

import cherrypy

config = {
    'global' : {
        'server.socket_host' : '127.0.0.1',
        'server.socket_port' : 8080,
        'server.thread_pool' : 8,
        # interval in seconds at which the timeout monitor runs
        'engine.timeout_monitor.frequency' : 1
    },
    '/' : {
        # the number of seconds to allow responses to run
        'response.timeout' : 2
    }
}

class App:

    @cherrypy.expose
    def index(self):
        time.sleep(8)
        print('after sleep')
        return '<em>Timeout test</em>'

if __name__ == '__main__':
    cherrypy.quickstart(App(), '/', config)
You can't kill a thread, but you can kill a process. If you can't control your tasks in any other way, you can wrap the execution in a process and use a monitor to kill it when it runs out of time. The following demonstrates the idea.
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import signal
import time
from multiprocessing import Process

import cherrypy

config = {
    'global' : {
        'server.socket_host' : '127.0.0.1',
        'server.socket_port' : 8080,
        'server.thread_pool' : 8,
        # disable the internal timeout monitor
        'engine.timeout_monitor.on' : False,
        # interval in seconds at which the timeout monitor runs
        'engine.timeout_kill_monitor.frequency' : 1
    },
    '/' : {
        # the number of seconds to allow responses to run
        'response.timeout' : 2
    }
}

class TimeoutKillMonitor(cherrypy._TimeoutMonitor):

    def run(self):
        # Let the original monitor mark timed-out responses first.
        cherrypy._TimeoutMonitor.run(self)
        for request, response in self.servings:
            if response.timed_out and hasattr(response, 'jobProcess'):
                if response.jobProcess.is_alive():
                    os.kill(response.jobProcess.pid, signal.SIGKILL)

cherrypy.engine.timeout_kill_monitor = TimeoutKillMonitor(cherrypy.engine)
cherrypy.engine.timeout_kill_monitor.subscribe()

def externalJob():
    time.sleep(8)

class App:

    @cherrypy.expose
    def index(self):
        p = Process(target=externalJob)
        p.start()
        cherrypy.response.jobProcess = p
        p.join()
        # To determine whether your job process has been killed you can
        # check ``killed = cherrypy.response.timed_out``. Reset it to
        # ``False`` to avoid ``TimeoutError`` and report the failure
        # in some other way.
        return '<em>Result</em>'

if __name__ == '__main__':
    cherrypy.quickstart(App(), '/', config)
Note that you can't use multiprocessing.Queue or multiprocessing.Pipe to communicate with your worker process, because when the process is killed, access to either of them can lock up your CherryPy thread. Here's the quote from the Python documentation for Process.terminate:
If this method is used when the associated process is using a pipe or queue then the pipe or queue is liable to become corrupted and may become unusable by other process. Similarly, if the process has acquired a lock or semaphore etc. then terminating it is liable to cause other processes to deadlock.
So yes, it is technically possible to force the timeout, but it is a discouraged and error-prone approach.
Upvotes: 3