Reputation: 7961
I want to download a few HTML pages from http://abc.com/view_page.aspx?ID= where the ID comes from an array of different numbers.
I want to visit each of these URLs and save each page as [ID].html, using a different proxy IP/port for each request.
I also want to use different user-agents and randomize the wait times before each download.
What is the best way of doing this? urllib2? pycURL? cURL? What do you prefer for the task at hand?
Please advise. Thanks guys!
Upvotes: 8
Views: 3569
Reputation: 4182
If you don't want to use open proxies, check out ProxyMesh, which does the IP rotation/randomization for you.
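For illustration, a minimal sketch of plugging a single rotating-proxy endpoint into urllib2 (the endpoint host, port, and credentials below are placeholders, not ProxyMesh's actual values):
import urllib2
# placeholder rotating-proxy endpoint; the service picks a different exit IP per request
proxy = urllib2.ProxyHandler({'http': 'http://USER:PASSWORD@proxy.example.com:8080'})
opener = urllib2.build_opener(proxy)
html = opener.open('http://abc.com/view_page.aspx?ID=123').read()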
Upvotes: 2
Reputation: 29472
Use something like:
import urllib2
import time
import random

MAX_WAIT = 5        # maximum random delay (seconds) between downloads
ids = ...           # list of page IDs to fetch
agents = ...        # list of User-Agent strings
proxies = ...       # list of 'host:port' proxy addresses

for id in ids:
    url = 'http://abc.com/view_page.aspx?ID=%d' % id
    # route this request through the proxy currently at the front of the list
    opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxies[0]}))
    html = opener.open(urllib2.Request(url, None, {'User-agent': agents[0]})).read()
    with open('%d.html' % id, 'w') as f:
        f.write(html)
    agents.append(agents.pop(0))    # rotate to the next user-agent
    proxies.append(proxies.pop(0))  # rotate to the next proxy
    time.sleep(MAX_WAIT * random.random())  # random pause before the next download
Upvotes: 5
Reputation: 16246
Use the Unix tool wget. It has options to specify a custom user-agent and a delay between each page retrieval.
See the wget(1) man page for more information.
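For example, a rough sketch of the relevant invocation (the proxy address, delay, and IDs are placeholder values):
# custom user-agent, proxy, and a randomized delay of roughly 2.5-7.5 s between retrievals;
# --wait/--random-wait apply when several URLs are fetched in one invocation,
# and output files get wget's default names unless renamed afterwards
wget --user-agent="Mozilla/5.0" --wait=5 --random-wait \
     -e use_proxy=yes -e http_proxy=127.0.0.1:8080 \
     "http://abc.com/view_page.aspx?ID=101" \
     "http://abc.com/view_page.aspx?ID=102"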
Upvotes: 2