Amy Obrian

Reputation: 1243

Server Error and DeadlineExceededError

I have a basic application. I use the Twitter API 1.1 and Python. When I run it locally I get no errors, but after deployment I get a DeadlineExceededError. Here is the log message:

Traceback (most recent call last):
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 266, in Handle
    result = handler(dict(self._environ), self._StartResponse)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
    rv = self.router.dispatch(request, response)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
    return route.handler_adapter(request, response)
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
    return handler.dispatch()
  File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
    return method(*args, **kwargs)
  File "/base/data/home/apps/s~tweetllrio/1.370638782538988919/main.py", line 52, in post
    ''+username+'&max_id='+str(max_id)+'&count=200')
  File "libs/oauth2/__init__.py", line 676, in request
    uri = req.to_url()
  File "libs/oauth2/__init__.py", line 421, in to_url
    query = parse_qs(query)
  File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urlparse.py", line 382, in parse_qs
    for name, value in parse_qsl(qs, keep_blank_values, strict_parsing):
  File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urlparse.py", line 423, in parse_qsl
    name = unquote(nv[0].replace('+', ' '))
  File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urlparse.py", line 337, in unquote
    if _is_unicode(s):
DeadlineExceededError

This is main.py:

class Search(webapp2.RequestHandler):

    def post(self):
        username = self.request.get("contenta")
        word = self.request.get("contentc")
        header, response = client.request(
            'https://api.twitter.com/1.1/statuses/user_timeline'
            '.json?include_entities=true&screen_name='+username+'&count=1')
        name = json.loads(response)[0]["user"]["name"]
        image = json.loads(response)[0]["user"]["profile_image_url"]
        max_id = json.loads(response)[0]["id"]
        count = 0
        tweets = []
        while count < 18:
            header, response = client.request(
                'https://api.twitter.com/1.1/statuses/user_timeline'
                '.json?include_entities=true&include_rts=false&screen_name='
                ''+username+'&max_id='+str(max_id)+'&count=200')
            for index in range(len(json.loads(response))-1):
                if word in json.loads(response)[index]["text"]:
                    tweets.append(json.loads(response)[index]["text"])
            max_id = json.loads(response)[len(json.loads(response))-1]["id"]
            count += 1

        template = JINJA_ENVIRONMENT.get_template('index.html')
        self.response.write(template.render(
            {"data": tweets[::-1], "name": name, "image": image, "da":len(tweets)})
        )


class MainPage(webapp2.RequestHandler):

    def get(self):

        template = JINJA_ENVIRONMENT.get_template('index.html')
        self.response.write(template.render({}))

application = webapp2.WSGIApplication([
    ('/', MainPage),
    ('/search', Search),
    ('/add', AddUSer),
], debug=True)

Can you please help me? If you need to see any other code, please just tell me.

Upvotes: 0

Views: 238

Answers (2)

dragonx

Reputation: 15143

The problem is that your overall request takes more than 60 seconds to complete. This isn't because you use urlfetch - a single fetch usually times out within a few seconds, and if it times out you can handle the error well within your 60 s limit.

The problem really is that you're issuing 18 urlfetch requests, one after another. Since each request can take a couple of seconds, it's easy for this to add up and hit the 60 s limit.

You probably need to rearchitect your main.py to do the actual URL fetches in a task queue and store the result in the datastore. Task queue tasks get a much longer deadline (10 minutes), so the fetches can finish there.

You'll need a second handler of some sort to check the status of the task after Search returns.
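
A minimal sketch of that rearchitecture, assuming a hypothetical SearchResult datastore model and /fetch_tweets and /status routes (none of which are in your original code): the search handler enqueues a task and returns immediately, a worker does the slow fetching under the task queue's 10-minute deadline, and a status handler lets the client poll for the stored result.

from google.appengine.api import taskqueue
from google.appengine.ext import ndb
import json
import webapp2


class SearchResult(ndb.Model):
    # Hypothetical model holding one finished search.
    tweets = ndb.TextProperty(repeated=True)
    done = ndb.BooleanProperty(default=False)


class Search(webapp2.RequestHandler):

    def post(self):
        # Enqueue the slow work and return well within the 60 s limit.
        result_key = SearchResult().put()
        taskqueue.add(url='/fetch_tweets', params={
            'username': self.request.get('contenta'),
            'word': self.request.get('contentc'),
            'key': result_key.urlsafe(),
        })
        self.response.write(json.dumps({'key': result_key.urlsafe()}))


class FetchTweets(webapp2.RequestHandler):

    def post(self):
        # Runs on a push queue, where the deadline is 10 minutes, so
        # 18 sequential Twitter fetches fit comfortably.
        result = ndb.Key(urlsafe=self.request.get('key')).get()
        tweets = []
        # ... your existing while-loop of client.request() calls goes
        # here, appending the matching tweet texts to `tweets` ...
        result.tweets = tweets
        result.done = True
        result.put()


class Status(webapp2.RequestHandler):

    def get(self):
        # The second handler: the client polls this until done is True.
        result = ndb.Key(urlsafe=self.request.get('key')).get()
        self.response.write(json.dumps(
            {'done': result.done, 'tweets': result.tweets}))


application = webapp2.WSGIApplication([
    ('/search', Search),
    ('/fetch_tweets', FetchTweets),
    ('/status', Status),
], debug=True)

Note that push queue tasks are retried automatically on failure, so keep the worker idempotent.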

Upvotes: 0

Prahalad Deshpande

Reputation: 4767

As mentioned in the comment by Wooble, this Stack Overflow question contains a possible answer to the DeadlineExceededError you see.

I will, however, try to explain the answer so that it helps you resolve your problem.

You fetch Internet resources on App Engine using the normal Python libraries urllib, urllib2 and httplib. On App Engine, however, these libraries fetch resources through the Google URL Fetch service, which means some other set of servers (not the one actually hosting your application) fetches the data for you.

When fetching resources through the URL Fetch service, if the request does not complete within the stipulated deadline (either one your application specifies or the service default), a DeadlineExceededError is raised.
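
For concreteness, a minimal sketch of setting an explicit deadline (the 60 s value is just an illustration; whether the oauth2 client's fetches pick up the default depends on them going through the URL Fetch-backed httplib, as described above):

from google.appengine.api import urlfetch

# Applies to every fetch made during the current request, including
# fetches issued indirectly through httplib-based wrappers such as
# the oauth2 client. The value is in seconds.
urlfetch.set_default_fetch_deadline(60)

# Or per call, when calling urlfetch directly:
result = urlfetch.fetch(
    'https://api.twitter.com/1.1/statuses/user_timeline.json',
    deadline=60)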

To quote from Dealing with DeadlineExceededError:

Making requests to external URLs using URLFetch can also produce DeadlineExceededErrors if the target website is having performance issues or normally takes more than 60 seconds to reply. The logged stack trace of the DeadlineExceededErrors should contain calls to the URLFetch libraries in these cases.

It may be that the Twitter API request is not completing within the stipulated deadline. Try one of the following:

  1. Fetch the Twitter resource in an asynchronous fashion (see the sketch after this list).
  2. Specify an explicit deadline which is greater than 60 seconds (like 120 s) and check whether the request completes successfully. I would not recommend this approach, as the right value is purely contextual to the scenario where the application runs and is found mostly by trial and error.
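
A minimal sketch of option 1, using the asynchronous URL Fetch API directly (the URL list, the 30 s deadline, and the process() helper are placeholders; the oauth2 client itself does not expose an asynchronous interface, so you would have to build the signed request URLs yourself):

from google.appengine.api import urlfetch

# Start all fetches in parallel; make_fetch_call() returns immediately.
# This pays off most when the requests are independent of each other -
# the paging loop in the question is sequential, because each max_id
# comes from the previous page.
urls = [
    'https://api.twitter.com/1.1/statuses/user_timeline.json',  # placeholder
]
rpcs = []
for url in urls:
    rpc = urlfetch.create_rpc(deadline=30)  # per-fetch deadline in seconds
    urlfetch.make_fetch_call(rpc, url)
    rpcs.append(rpc)

# Collect results; get_result() blocks until that fetch finishes (or
# raises on timeout), but the fetches themselves ran concurrently.
for rpc in rpcs:
    result = rpc.get_result()
    if result.status_code == 200:
        process(result.content)  # hypothetical handler for the payload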

Upvotes: 1
