Reputation: 2547
Not really sure how to ask this question since I am just beginning to learn Python, but here it goes:
I have a web scraper that uses threading to grab info. I am looking up pricing and stock for about 900 products. When I test the script with about half of that, there is no problem. When I try to scrape all 900 products I get a "can't start new thread" error.
I imagine this is due to some memory constraint, or because I am sending the server too many requests at once.
I would like to know if there is a way to slow down the threads or to stagger the requests.
Error output:
Traceback (most recent call last):
File "C:\Python27\tests\dxpriceupdates.py", line 78, in <module>
t.start()
error: can't start new thread
>>>
Traceback (most recent call last):Exception in thread Thread-554:
Traceback (most recent call last):
File "C:\Python27\lib\urllib.py", line 346, in open_http
errcode, errmsg, headers = h.getreply()
File "C:\Python27\lib\httplib.py", line 1117, in getreply
response = self._conn.getresponse()
File "C:\Python27\lib\httplib.py", line 1045, in getresponse
response.begin()
File "C:\Python27\lib\httplib.py", line 441, in begin
self.msg = HTTPMessage(self.fp, 0)
File "C:\Python27\lib\mimetools.py", line 25, in __init__
rfc822.Message.__init__(self, fp, seekable)
File "C:\Python27\lib\rfc822.py", line 108, in __init__
self.readheaders()
File "C:\Python27\lib\httplib.py", line 308, in readheaders
self.addheader(headerseen, line[len(headerseen)+1:].strip())
MemoryError
<bound method Thread.__bootstrap of <Thread(Thread-221, stopped 9512)>>Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Unhandled exception in thread started by Unhandled exception in thread started by ...
Here is the Python script (skulist.txt is just a comma-separated text file like 12345, 23445, 5551,...):
from threading import Thread
import urllib
import re
import json
import math

def th(ur):
    site = "http://dx.com/p/GetProductInfoRealTime?skus="+ur
    htmltext = urllib.urlopen(site)
    data = json.load(htmltext)
    htmlrates = urllib.urlopen("http://rate-exchange.appspot.com/currency?from=USD&to=AUD")
    datarates = json.load(htmlrates)
    if data['success'] == True:
        if data['data'][0]['discount'] == 0:
            price = float(data['data'][0]['price'])
            rate = float(datarates['rate']) + 0.12
            cost = price*rate
            if cost <= 5:
                saleprice = math.ceil(cost*1.7) - .05
            elif (cost > 5) and (cost <= 10):
                saleprice = math.ceil(cost*1.6) - .05
            elif (cost > 10) and (cost <= 15):
                saleprice = math.ceil(cost*1.55) - .05
            else:
                saleprice = math.ceil(cost*1.5) - .05
            if data['data'][0]['issoldout']:
                soldout = "Out Of Stock"
                enabled = "Disable"
                qty = "0"
            else:
                soldout = "In Stock"
                enabled = "Enabled"
                qty = "9999"
            #print model, saleprice, soldout, qty, enabled
            myfile.write(str(ur)+","+str(saleprice)+","+str(soldout)+","+str(qty)+","+str(enabled)+"\n")
        else:
            price = float(data['data'][0]['listprice'])
            rate = float(datarates['rate']) + 0.12
            cost = price*rate
            if cost <= 5:
                saleprice = math.ceil(cost*1.7) - .05
            elif (cost > 5) and (cost <= 10):
                saleprice = math.ceil(cost*1.6) - .05
            elif (cost > 10) and (cost <= 15):
                saleprice = math.ceil(cost*1.55) - .05
            else:
                saleprice = math.ceil(cost*1.5) - .05
            if data['data'][0]['issoldout']:
                soldout = "Out Of Stock"
                enabled = "Disable"
                qty = "0"
            else:
                soldout = "In Stock"
                enabled = "Enabled"
                qty = "9999"
            #print model, saleprice, soldout, qty, enabled
            myfile.write(str(ur)+","+str(saleprice)+","+str(soldout)+","+str(qty)+","+str(enabled)+"\n")
    else:
        qty = "0"
        print ur, "error \n"
        myfile.write(str(ur)+","+"0.00"+","+"Out Of Stock"+","+str(qty)+","+"Disable\n")

skulist = open("skulist.txt").read()
skulist = skulist.replace(" ", "").split(",")
myfile = open("prices/price_update.txt", "w+")
myfile.close()
myfile = open("prices/price_update.txt", "a")
threadlist = []
for u in skulist:
    t = Thread(target=th, args=(u,))
    t.start()
    threadlist.append(t)
for b in threadlist:
    b.join()
myfile.close()
Upvotes: 1
Views: 949
Reputation: 33046
Don't fire 900 threads at once; your PC could literally choke! Instead, use a pool and distribute the work across a fixed number of workers. Use multiprocessing like this:
from multiprocessing import Pool

WORKERS = 10

p = Pool(WORKERS)
p.map(th, skulist)
Find the right value for WORKERS by experimenting a bit.
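Since the work here is I/O-bound (waiting on HTTP responses), a thread pool works just as well as a process pool and sidesteps the pickling and `if __name__ == '__main__':` requirements that multiprocessing.Pool has on Windows. Here is a minimal sketch using multiprocessing.pool.ThreadPool; `fetch` is a hypothetical stand-in for the `th()` function above, and the SKUs are dummies. Having each worker return its CSV line (instead of many threads writing to a shared file) lets the main thread do all the writing, so no lock is needed:

```python
from multiprocessing.pool import ThreadPool

WORKERS = 10  # only this many threads ever exist; tune by experiment

def fetch(sku):
    # Stand-in for th(): do the urlopen/json work here and
    # return the finished CSV line instead of writing to a file.
    return sku + ",0.00,Out Of Stock,0,Disable"

skulist = ["12345", "23445", "5551"]  # dummy SKUs for illustration

pool = ThreadPool(WORKERS)
lines = pool.map(fetch, skulist)  # blocks until every SKU is processed
pool.close()
pool.join()

# Write all results from the main thread -- no locking required.
with open("price_update.txt", "w") as myfile:
    for line in lines:
        myfile.write(line + "\n")
```

`pool.map` hands out SKUs to the 10 workers as they become free, so at most 10 requests are in flight at any moment, which also naturally staggers the load on the server.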
Upvotes: 4