Reputation: 563
I'm faced with a problem that our android app's HTTP requests quite often time out and I need to find a reasonable level for the timeout limits and the number of retries.
The currently implemented solution (not my implementation), using Apache's DefaultHttpClient, does three manual retries with increasing timeouts, as follows:
private static final int[] CONNECTION_TIMEOUTS = new int[] {4000, 5000, 10000};
private static final int[] SOCKET_TIMEOUTS = new int[] {5000, 8000, 15000};
I'm having a hard time understanding the rationale for the increasing timeouts and what problem they are meant to solve. The app is mostly used while the phone is connected to 3G. Can anyone explain why increasing the timeout with each retry would be preferable, or share a best practice for HTTP request handling on 3G networks?
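For reference, the retry scheme described above boils down to a loop like the following. This is only a sketch of my reading of it, not the actual implementation: the `Request` interface is a hypothetical stand-in for the real HTTP call, and the arrays mirror the constants shown above.

```java
import java.io.IOException;

// Sketch of the escalating-timeout retry loop described in the question.
public class RetryingClient {
    private static final int[] CONNECTION_TIMEOUTS = {4000, 5000, 10000};
    private static final int[] SOCKET_TIMEOUTS = {5000, 8000, 15000};

    // Hypothetical abstraction over the actual HTTP call.
    interface Request {
        // Attempt the call with the given timeouts; throws IOException on failure.
        String execute(int connectTimeoutMs, int socketTimeoutMs) throws IOException;
    }

    static String executeWithRetries(Request request) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < CONNECTION_TIMEOUTS.length; attempt++) {
            try {
                // Each retry gets a longer connection and socket timeout.
                return request.execute(CONNECTION_TIMEOUTS[attempt], SOCKET_TIMEOUTS[attempt]);
            } catch (IOException e) {
                last = e; // remember the failure and try again with longer timeouts
            }
        }
        throw last; // all attempts failed
    }
}
```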
Upvotes: 0
Views: 1723
Reputation: 89576
I don't see any reason for doing that rather than just using the largest timeout from the start. Maybe someone else can, though.
Maybe a bit off-topic, but I'd like to draw your attention to this article, which suggests migrating to HttpURLConnection, as it is and will be better supported in the future. Read it in full to see the advantages and disadvantages of HttpURLConnection over the Apache libs, and decide whether or not the switch is worth it.
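If you do switch, the equivalent timeout settings live directly on the connection object. A minimal sketch (the endpoint URL is a placeholder; no bytes go over the wire until you actually connect or read):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlConnectionExample {
    // Open a connection with explicit timeouts configured.
    static HttpURLConnection open(String endpoint, int connectMs, int readMs) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setConnectTimeout(connectMs); // analogous to Apache's connection timeout
        conn.setReadTimeout(readMs);       // analogous to Apache's socket (SO_TIMEOUT) timeout
        return conn;
    }
}
```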
Upvotes: 2
Reputation: 7645
It's possible that on a really slow network it might take a long time to connect, and conversely that a connection can fail even on a faster network.
So it would make sense for the first attempt to use a shorter timeout, so you retry sooner when a connection gets a bit "lost" on a fast network. But I can't think of a reason for the timeouts to keep increasing after that.
Because there are so many different networks, it's almost impossible to collect good data about typical connection and timeout times. I assume the numbers you see were not chosen empirically.
Upvotes: 1