Reputation: 1592
I want to scrape the URLs present in a list. Basically I am scraping a website: I find a particular set of links on it, scrape each of those pages, then look for another particular set of links on each of them and scrape those as well. My code:
from bs4 import BeautifulSoup
import urllib.request
import re
r = urllib.request.urlopen('http://i.cantonfair.org.cn/en/ExpExhibitorList.aspx?k=glassware')
soup = BeautifulSoup(r, "html.parser")
links = soup.find_all("a", href=re.compile(r"expexhibitorlist\.aspx\?categoryno=[0-9]+"))
linksfromcategories = ([link["href"] for link in links])
string = "http://i.cantonfair.org.cn/en/"
linksfromcategories = [string + x for x in linksfromcategories]
subcatlinks = list()
for link in linksfromcategories:
    response = urllib.request.urlopen(link)
    soup2 = BeautifulSoup(response, "html.parser")
    links2 = soup2.find_all("a", href=re.compile(r"ExpExhibitorList\.aspx\?categoryno=[0-9]+"))
    linksfromsubcategories = [link["href"] for link in links2]
    subcatlinks.append(linksfromsubcategories)
responses = urllib.request.urlopen(subcatlinks)
soup3 = BeautifulSoup(responses, "html.parser")
print(soup3)
And I am getting the error
Traceback (most recent call last):
  File "D:\python\phase2.py", line 46, in <module>
    responses = urllib.request.urlopen(subcatlinks)
  File "C:\Users\amanp\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 162, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\amanp\AppData\Local\Programs\Python\Python35-32\lib\urllib\request.py", line 456, in open
    req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'
Upvotes: 0
Views: 958
Reputation: 2710
You can only pass one link at a time to urllib.request.urlopen, not a whole list of them. So you'll need another loop, like this:
for link in subcatlinks:
    response = urllib.request.urlopen(link)
    soup3 = BeautifulSoup(response, "html.parser")
    print(soup3)
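One caveat before that loop will work: in the question's code, subcatlinks is built with append, so each entry is itself a *list* of hrefs, and those hrefs are relative (they lack the http://i.cantonfair.org.cn/en/ prefix the first stage added). A minimal sketch of flattening the nested lists and prefixing the base URL first (the categoryno values below are made-up sample data, not real ones from the site):

```python
base = "http://i.cantonfair.org.cn/en/"

# Hypothetical sample of what subcatlinks looks like after the scraping loop:
# a list of lists of relative hrefs.
subcatlinks = [
    ["ExpExhibitorList.aspx?categoryno=123", "ExpExhibitorList.aspx?categoryno=456"],
    ["ExpExhibitorList.aspx?categoryno=789"],
]

# Flatten the nested lists and turn each relative href into an absolute URL,
# so every entry is a single string that urlopen can accept.
flat_links = [base + href for sublist in subcatlinks for href in sublist]

for url in flat_links:
    print(url)
```

Alternatively, use subcatlinks.extend(linksfromsubcategories) in the original loop so the list is flat from the start; the base-URL prefix is still needed either way.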
Upvotes: 1