Reputation: 534
I am trying to scrape all the mobile data from the Flipkart site using Python 2.7 (through IDLE) and Beautiful Soup. Below is my code. In part 1, I collect the individual links to all the Samsung mobiles, and in part 2 I scrape the mobile specifications (the td elements) from those pages. But after a few mobiles, I get the following error:
Traceback (most recent call last):
File "E:\data base python\collectinghrefsamasungstack.py", line 16, in <module>
htmlfile = urllib.urlopen(url) #//.request is in 3.0x
File "C:\Python27\lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "C:\Python27\lib\urllib.py", line 208, in open
return getattr(self, name)(url)
File "C:\Python27\lib\urllib.py", line 345, in open_http
h.endheaders(data)
File "C:\Python27\lib\httplib.py", line 969, in endheaders
self._send_output(message_body)
File "C:\Python27\lib\httplib.py", line 829, in _send_output
self.send(msg)
File "C:\Python27\lib\httplib.py", line 791, in send
self.connect()
File "C:\Python27\lib\httplib.py", line 772, in connect
self.timeout, self.source_address)
File "C:\Python27\lib\socket.py", line 571, in create_connection
raise err
IOError: [Errno socket error] [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
My code:
import urllib
import re
from bs4 import BeautifulSoup

# part 1: collect the links to the individual Samsung mobile pages
url = "http://www.flipkart.com/mobiles/samsung~brand/pr?sid=tyy,4io"
regex = '<a class="fk-display-block" data-tracking-id="prd_title" href=(.+?)title'  # it will find the title
pattern = re.compile(regex)
htmlfile = urllib.urlopen(url)
htmltext = htmlfile.read()
docSoup = BeautifulSoup(htmltext)
abc = docSoup.findAll('a')
c = str(abc)
count = 0

# part 2: go to each link and gather the mobile specifications
title = re.findall(pattern, c)
temp = 1
file2 = open('c:/Python27/samsung.txt', 'w')
for i in title:
    print i
    file2.write(i)
    file2.write("\n")
    count = count + 1
    print "\n1\n"
    if temp > 0:
        mob_url = 'http://www.flipkart.com' + i[1:len(i) - 2]
        htmlfile = urllib.urlopen(mob_url)
        htmltext = htmlfile.read()
        docSoup = BeautifulSoup(htmltext)
        abc = docSoup.find_all('td')
        spec_file = open('c:/Python27/prut2' + str(count) + '.txt', 'w')  # renamed from `file`, which shadows a builtin
        mod = 0
        count = count + 1
        pr = -1
        for j in abc:
            if j.text == 'Brand':
                pr = 3
            if mod == 1:
                file2.write(j.text)
                file2.write("\n")
                mod = 0
            if j.text == 'Model ID':
                mod = 1
            if pr > 0:
                spec_file.write(j.text)
                spec_file.write('\n')
        spec_file.close()  # note the parentheses: a bare `file.close` does nothing
    else:
        temp = temp + 1
print count
file2.close()
I tried disabling the antivirus, and the internet connection I am using is quite stable, but I still get the error. Is there any way I can fix it?
Upvotes: 1
Views: 448
Reputation: 14873
Maybe you are opening too many connections. Add htmlfile.close() after each htmltext = htmlfile.read() so that every connection is released before you open the next one.
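Since [Errno 10060] is a timeout, it can also help to retry a failed request a couple of times instead of crashing. A minimal sketch of both ideas combined (the helper name fetch_with_retry and its retries/delay parameters are my own, not part of the original code):

```python
import socket
import time

def fetch_with_retry(open_func, url, retries=3, delay=2):
    """Open `url` via open_func (e.g. urllib.urlopen), always close the
    handle, and retry on transient socket errors such as [Errno 10060]."""
    for attempt in range(retries):
        try:
            handle = open_func(url)
            try:
                return handle.read()
            finally:
                handle.close()  # release the connection promptly
        except (IOError, socket.error):
            if attempt == retries - 1:
                raise  # out of retries: re-raise the last error
            time.sleep(delay)  # back off briefly before trying again

# In the question's loop this would replace the urlopen/read pair, e.g.:
#   htmltext = fetch_with_retry(urllib.urlopen, mob_url)
```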
Upvotes: 1