Reputation: 621
I have a list of URLs and I want my code to loop through multiple pages of each of these URLs.
import requests
from bs4 import BeautifulSoup

urls = ['https://www.f150forum.com/f118/2019-adding-adaptive-cruise-454662/','https://www.f150forum.com/f118/adaptive-cruise-control-sensor-blockage-446041/']
comments = []
for url in urls:
    with requests.Session() as req:
        for item in range(1):
            response = req.get(url+"index{item}/")
            soup = BeautifulSoup(response.content, "html.parser")
            for item in soup.findAll('div',attrs={"class":"ism-true"}):
                result = [item.get_text(strip=True, separator=" ")]
                comments.append(result)
The above code throws an error. Can you let me know how to loop through multiple pages? The error I am getting is "'NoneType' object has no attribute 'findAll'".
Upvotes: 1
Views: 282
Reputation: 8352
soup can return None. Only continue if soup has a value.
soup = BeautifulSoup(response.content, "html.parser")
if soup:
    for item in soup.findAll('div', attrs={"class": "ism-true"}):
        result = [item.get_text(strip=True, separator=" ")]
        comments.append(result)
Note that response.content is the response as bytes, while response.text is the same response decoded to a string. If matching still fails, try the string form.
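For illustration, a minimal sketch of that difference (using the first URL from your question):

import requests

response = requests.get("https://www.f150forum.com/f118/2019-adding-adaptive-cruise-454662/")
print(type(response.content))  # <class 'bytes'> - raw payload
print(type(response.text))     # <class 'str'>   - decoded with the detected encoding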
It also looks like you want an f-string for the URL, since "item" is a number:
for item in range(1):
    response = req.get(f"{url}index{item}/")
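Putting both fixes together, a minimal sketch of the full loop. This assumes the forum's page URLs really follow the index<N>/ pattern from your code, and the page count of 2 is just a placeholder, so adjust range() to the actual number of pages per thread:

import requests
from bs4 import BeautifulSoup

urls = ['https://www.f150forum.com/f118/2019-adding-adaptive-cruise-454662/',
        'https://www.f150forum.com/f118/adaptive-cruise-control-sensor-blockage-446041/']

comments = []
with requests.Session() as req:
    for url in urls:
        for page in range(1, 3):  # placeholder page range; adjust to the real page count
            response = req.get(f"{url}index{page}/")
            soup = BeautifulSoup(response.content, "html.parser")
            if soup:  # only continue if the page parsed
                for item in soup.findAll('div', attrs={"class": "ism-true"}):
                    comments.append([item.get_text(strip=True, separator=" ")])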
Upvotes: 1