Askar

Reputation: 523

Error code 503 in GCP pubsub.v1.Subscriber.StreamingPull

I am trying to use the Pub/Sub service, and I noticed the following error code in my dashboard.

[screenshot: Pub/Sub dashboard showing error code 503 for pubsub.v1.Subscriber.StreamingPull]

Here is a link explaining what code 503 means.

Is there anything I can do to prevent this?


Upvotes: 1

Views: 2531

Answers (3)

Febin

Reputation: 181

StreamingPull has a 100% error rate.

StreamingPull streams are always terminated with a non-OK status (HTTP 503). Note that, unlike in regular RPCs, the status here is simply an indication that the stream has been broken, not that requests are failing.

https://cloud.google.com/pubsub/docs/streamingpull-troubleshooting#streamingpull_has_a_100_error_rate

Upvotes: 1

Titu

Reputation: 212

I was getting the same errors when subscribing. They stopped after I set a timeout.

from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

def callback(message):
    message.ack()  # Acknowledge each received message.

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")  # placeholder names
listener_streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
timeout = 600  # seconds; it was previously None

# Wrap subscriber in a 'with' block to automatically call close() when done.
with subscriber:
    try:
        # When `timeout` is not set, result() will block indefinitely,
        # unless an exception is encountered first.
        listener_streaming_pull_future.result(timeout=timeout)
    except TimeoutError:
        listener_streaming_pull_future.cancel()  # Trigger the shutdown.
        listener_streaming_pull_future.result()  # Block until the shutdown is complete.

Upvotes: 0

dsesto

Reputation: 8178

As explained in the documentation link about error codes that you shared, HTTP code 503 ("UNAVAILABLE") is returned when the Pub/Sub service was not able to process a request. These errors are generally transient, and there is no way to prevent them entirely; you can only work around them with a retry strategy such as the one described below.

The Google Cloud Pub/Sub SLA shows the guaranteed uptime for this service. As you can see, it is not 100%: transient errors may happen. They should not disturb your service greatly, provided that you follow the recommended practice of implementing a retry strategy with exponential backoff.

This documentation page shows an example implementation of an exponential backoff retry strategy. The example is for Google Cloud Storage, but it can (and should) be applied to any similar service. It consists of retrying failed Pub/Sub requests with an increasing backoff in order to raise the probability of a request succeeding. This is the recommended best practice for overcoming transient issues.
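For illustration, here is a minimal sketch of such a retry loop in Python, assuming the google-cloud-pubsub client; the project and topic names, retry count, and delays are placeholders you would tune for your own workload:

import random
import time

from google.api_core.exceptions import ServiceUnavailable
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "my-topic")  # placeholder names

def publish_with_backoff(data, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            # result() blocks until the publish succeeds or raises.
            return publisher.publish(topic_path, data).result()
        except ServiceUnavailable:  # the 503 / UNAVAILABLE case
            if attempt == max_retries - 1:
                raise  # Give up after the last attempt.
            # Exponential backoff: 1s, 2s, 4s, ... plus random jitter.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 1))

publish_with_backoff(b"hello")

Note that the official client libraries already retry transient errors like UNAVAILABLE by default, so an explicit loop like this is mostly useful when you need custom retry behavior.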

Upvotes: 3
