Kariem

Reputation: 4873

Properly handle timeout on Cloud Run

We use Google Cloud Run to wrap an analysis developed in R behind a web API. For this, we have a small Fastify app that launches an R script and uploads the results to Google Cloud Storage. The process's stdout and stderr are written to a file, which is also uploaded at the end of the analysis.

However, we sometimes run into issues when a process takes longer to execute than expected. In these cases, nothing gets uploaded, and debugging is difficult because stdout and stderr are "lost" on the instance. The only thing we see in the Cloud Run logs is this message:

The request has been terminated because it has reached the maximum request timeout

Is there a recommended way to handle a request timeout?

In App Engine there used to be a descriptive error: DeadlineExceededError in Python and DeadlineExceededException in Java.

We are currently evaluating the approach sketched below.
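Roughly, the idea is to enforce our own deadline inside the request handler, safely below Cloud Run's limit, so we can kill the R process and still upload the captured stdout/stderr before the platform terminates the request. Here is a minimal sketch of that idea; the soft limit, bucket name, and script path are placeholders:

```ts
// Sketch only: enforce a "soft" deadline below Cloud Run's request timeout,
// so there is still time to upload the log before the request is killed.
// SOFT_LIMIT_MS, the bucket name, and the script path are placeholders.
import Fastify from "fastify";
import { spawn } from "node:child_process";
import { createWriteStream } from "node:fs";
import { Storage } from "@google-cloud/storage";

const fastify = Fastify();
const storage = new Storage();
const SOFT_LIMIT_MS = 55 * 60 * 1000; // stay below a 60-minute request timeout

fastify.post("/analyze", async (_request, reply) => {
  const logPath = `/tmp/analysis-${Date.now()}.log`;
  const log = createWriteStream(logPath);

  // Capture both streams in one file, as in our current setup.
  const child = spawn("Rscript", ["analysis.R"]);
  child.stdout.pipe(log, { end: false });
  child.stderr.pipe(log, { end: false });

  // Race the R process against our soft deadline.
  const result = await new Promise<number | "timeout">((resolve) => {
    const timer = setTimeout(() => {
      child.kill("SIGTERM"); // let R clean up if it can
      resolve("timeout");
    }, SOFT_LIMIT_MS);
    child.on("exit", (code) => {
      clearTimeout(timer);
      resolve(code ?? 1);
    });
  });

  // Flush and upload the log in every case, so nothing is lost on a timeout.
  await new Promise((done) => log.end(done));
  await storage.bucket("my-results-bucket").upload(logPath);

  if (result === "timeout") {
    return reply.code(504).send({ error: "analysis timed out" });
  }
  return reply.send({ exitCode: result });
});

// Cloud Run injects the port to listen on via the PORT environment variable.
fastify.listen({ port: Number(process.env.PORT ?? 8080), host: "0.0.0.0" });
```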

This feels a little complicated so any feedback very appreciated.

Upvotes: 2

Views: 956

Answers (1)

Wesley LeMahieu

Reputation: 2613

Since the default request timeout is 5 minutes and can be extended up to 60 minutes, I would start by simply increasing it to 10 minutes, then observe over the course of a month how that affects your service.
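For example, assuming your service is named my-service (a placeholder, as is the region), the timeout can be raised from the command line:

```sh
# Raise the Cloud Run request timeout to 10 minutes (600 seconds).
gcloud run services update my-service --timeout=600 --region=us-central1
```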

Aside from that fix, I would start investigating why your process is taking longer than expected, and whether it's perhaps due to an ever-growing result set.

If there's no result-set scalability concern, then bumping the timeout up from the 5-minute default seems to be the most reasonable and simple fix. It would only become a problem if your script has to deal with more data at some point in the future.

Upvotes: -1
