Shane

Reputation: 4983

Does urllib2.urlopen() cache stuff?

The Python documentation doesn't mention this. I've been testing a website by repeatedly fetching it with urllib2.urlopen() to extract certain content, and I've noticed that sometimes, after I update the site, urllib2.urlopen() doesn't seem to pick up the newly added content. So I wonder: does it cache stuff somewhere?

Upvotes: 14

Views: 13113

Answers (5)

mirek

Reputation: 1350

If you make changes and then test the behaviour both from a browser and from urllib, it is easy to make a silly mistake. In the browser you are logged in, but through urllib.urlopen your app may keep redirecting you to the same login page, so if you only look at the page size or the top of your common layout, you could conclude that your changes have no effect.
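A quick way to rule out a silent redirect is to compare the final URL of the response with the one you requested; urllib2 follows redirects transparently, so a bounce to a login page shows up there. A minimal sketch, using a made-up URL:

import urllib2

# Hypothetical URL for illustration only.
url = 'http://example.com/members/dashboard'

response = urllib2.urlopen(url)

# If the final URL differs from the requested one, you were probably
# redirected, e.g. to a login page.
if response.geturl() != url:
    print 'Redirected to:', response.geturl()
else:
    print response.read()[:200]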

Upvotes: 0

leoluk

Reputation: 12971

So I wonder: does it cache stuff somewhere?

It doesn't.

If you don't see new data, this can have many causes. Most larger web services use server-side caching for performance reasons, for example caching proxies like Varnish and Squid, or application-level caching.

If the problem is caused by server-side caching, there is usually no way to force the server to give you the latest data.


For caching proxies like Squid, things are different. Usually, Squid adds some additional headers to the HTTP response (visible via response.info().headers).

If you see a header field called X-Cache or X-Cache-Lookup, this means that you aren't connected to the remote server directly, but through a transparent proxy.

If you see something like X-Cache: HIT from proxy.domain.tld, it means that the response you got is cached. The opposite is X-Cache: MISS from proxy.domain.tld, which means that the response is fresh.
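A minimal sketch of how you might check for those headers with urllib2 (the URL is a placeholder):

import urllib2

# Placeholder URL for illustration.
response = urllib2.urlopen('http://example.com/')

# response.info() returns the response headers as a message object;
# look for markers that a caching proxy sits in between.
headers = response.info()
for name in ('X-Cache', 'X-Cache-Lookup'):
    value = headers.getheader(name)
    if value is not None:
        print '%s: %s' % (name, value)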

Upvotes: 10

Luca

Reputation: 511

Your web server or an HTTP proxy may be caching content. You can try to disable caching by adding a Pragma: no-cache request header:

import urllib2

request = urllib2.Request(url)
request.add_header('Pragma', 'no-cache')
content = urllib2.build_opener().open(request).read()

Upvotes: 1

Chris

Reputation: 1643

Very old question, but I had a similar problem which this solution did not resolve.
In my case I had to spoof the User-Agent like this:

import urllib2

request = urllib2.Request(url)
request.add_header('User-Agent', 'Mozilla/5.0')
content = urllib2.build_opener().open(request).read()

Hope this helps anyone...

Upvotes: 5

Carol

Reputation: 9

I find it hard to believe that urllib2 does no caching, because in my case the data is only refreshed after the program restarts. If the program is not restarted, the data appears to be cached forever. Also, retrieving the same data from Firefox never returns stale data.

Upvotes: -2
