Reputation: 36404
My system is not behind any proxy.
params = urllib.urlencode({'search':"August Rush"})
f = urllib.urlopen("http://www.thepiratebay.org/search/query", params)
This goes into an infinite loop (or it just hangs). I could obviously get rid of this, use FancyURLopener, and build the query URL myself rather than passing parameters, but I think the approach I'm using now is better and cleaner.
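For reference, the manual approach I mean would look roughly like this (a sketch only; the exact search path is a guess on my part, and I haven't run the opener against the site):

```python
try:
    from urllib import FancyURLopener, quote_plus  # Python 2 names
except ImportError:
    from urllib.request import FancyURLopener      # Python 3 equivalents
    from urllib.parse import quote_plus

# Build the search URL by hand instead of passing POST parameters
query = quote_plus("August Rush")                   # spaces become '+'
url = "http://www.thepiratebay.org/search/" + query
print(url)

# opener = FancyURLopener()
# f = opener.open(url)   # not run here; requires network access
```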
Edit: This turned out to be a networking problem: my Ubuntu workstation was configured to use a different proxy. After changing that configuration, it worked. Thank you!
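In case anyone else hits this: urllib picks up proxy settings from environment variables such as `http_proxy`, so you can inspect what it will use, or pass an empty proxies dict to force a direct connection. A minimal sketch (the version-compatible import and the fake proxy value are my additions):

```python
import os

try:
    from urllib import getproxies  # Python 2
except ImportError:
    from urllib.request import getproxies  # Python 3

# Simulate a misconfigured workstation to see what urlopen would pick up
os.environ['http_proxy'] = 'http://badproxy.example:3128'
print(getproxies()['http'])

# To bypass any environment proxy entirely (Python 2 urllib):
# f = urllib.urlopen(url, params, proxies={})  # empty dict disables proxies
```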
Upvotes: 0
Views: 132
Reputation: 59604
This works for me:
import urllib
params = urllib.urlencode({'q': "August Rush", 'page': '0', 'orderby': '99'})
f = urllib.urlopen("http://www.thepiratebay.org/s/", params)
with open('text.html', 'w') as ff:
    ff.write('\n'.join(f.readlines()))
I opened http://www.thepiratebay.org in Google Chrome with the network inspector enabled, typed "August Rush" into the search field, and pressed 'Search'. Then I analyzed the request headers that were sent and wrote the code above.
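For what it's worth, `urlencode` just turns that dict into the same query string the network inspector showed. A quick sketch (using a list of pairs here so the parameter order is deterministic):

```python
try:
    from urllib import urlencode  # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3

# The same parameters as above, encoded into a query string
params = urlencode([('q', 'August Rush'), ('page', '0'), ('orderby', '99')])
print(params)  # -> q=August+Rush&page=0&orderby=99
```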
Upvotes: 1
Reputation: 4727
The posted code works fine for me, with Python 2.7.2 on Windows.
Have you tried using an HTTP-debugging tool, such as Fiddler2, to see the actual conversation between your program and the site?
If you run Fiddler2 on port 8888 on localhost, you can do this to see the request and response:
import urllib
proxies = {"http": "http://localhost:8888"}
params = urllib.urlencode({'search':"August Rush"})
f = urllib.urlopen("http://www.thepiratebay.org/search/query", params, proxies)
print len(f.read())
Upvotes: 1