Reputation: 189
I know this error code shows up when my application’s rate limit has been exhausted, so I'm wondering if anyone can provide input on what a good frequency of requests per minute/hour/day would be. Also, right now my app runs the following code every time I start the program:
for tweet in tweepy.Cursor(api.search, search, count=100, tweet_mode='extended').items(1000):
I do a try/except where I like/rt/comment on certain tweets that match certain criteria, and the except clause catches the error returned when a tweet has already been liked/rt'd/commented on. Does this count as a request, and if so, am I wasting it on an exception catch? If it's successful in liking/rt-ing/commenting, I give the program a 90-second timeout (time.sleep(90)), but I'm guessing that's not enough? Sorry for so many questions; I'm not sure how else to put it.
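To make that concrete, here is roughly what the body of the loop does (simplified: matches_criteria is a placeholder for my real filtering, 'my reply' stands in for my actual comment text, and api is my authenticated tweepy.API object):

import time
import tweepy

for tweet in tweepy.Cursor(api.search, search, count=100, tweet_mode='extended').items(1000):
    if not matches_criteria(tweet):   # placeholder for my real checks
        continue
    try:
        api.create_favorite(tweet.id)   # like
        api.retweet(tweet.id)           # retweet
        api.update_status('my reply', in_reply_to_status_id=tweet.id)  # comment
        time.sleep(90)                  # wait 90 seconds after a successful action
    except tweepy.TweepError:
        # raised when the tweet has already been liked/retweeted/replied to
        pass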
Upvotes: 1
Views: 776
Reputation: 987
So this doesn't answer your Twitter API-specific questions, but it's a more general approach to handling rate limiting.
A pretty common pattern for handling rate limiting is a backoff retrier. When a request fails due to a rate-limiting error, instead of retrying immediately or just giving up, it can be useful to keep retrying, but with an increasing delay between attempts.
For example, if a request fails, the program could retry once after 0.5s. If this fails too, it may then wait 1s before retrying again.
There are already Python libraries for this, such as backoff, or you could implement a basic version yourself. It might look something like this (a basic version, using tweepy's RateLimitError as the rate-limit exception):
import time

import tweepy

def call_with_backoff(api_call, max_retries=10):
    cur_wait = 1   # seconds to sleep after the first rate-limit error
    wait_inc = 1   # extra seconds added to the wait after each failure
    retries = 0
    while retries < max_retries:
        retries += 1
        try:
            return api_call()          # the actual Twitter API call
        except tweepy.RateLimitError:  # raised by tweepy 3.x on HTTP 429
            time.sleep(cur_wait)
            cur_wait += wait_inc
            continue
    # If we fall out of the loop, we must have exceeded max retries
    raise tweepy.RateLimitError('rate limit: max retries exceeded')
This retries a maximum of 10 times, starting with a 1-second delay and increasing the delay by 1 second after each failed attempt.
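You'd use it by wrapping your call in a function or lambda, e.g. call_with_backoff(lambda: api.search(search, count=100, tweet_mode='extended')). Alternatively, the backoff library mentioned above does the same job as a decorator; a minimal sketch (using exponential rather than fixed-increment waits, and again assuming tweepy 3.x's RateLimitError) would be:

import backoff
import tweepy

# Retry up to 10 times, with exponentially growing waits, on rate-limit errors
@backoff.on_exception(backoff.expo, tweepy.RateLimitError, max_tries=10)
def search_tweets(api, query):
    return api.search(query, count=100, tweet_mode='extended')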
Upvotes: 1