hamster

Reputation: 53

Web Crawler - TooManyRedirects: Exceeded 30 redirects. (python)

I've been following one of the YouTube tutorials, but I've run into an issue. I'm new to Python, and I know there are one or two similar questions, but I've read them and don't understand the answers. Can someone help me out? Thanks.

import requests
from bs4 import BeautifulSoup
def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.thenewboston.com/forum/home.php?page=" + str(page)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text)
        for link in soup.findAll('a', {'class': 'post-title'}):
            href = link.get('href')
            print(href)
        page += 1
trade_spider(2)

Upon running the program, I get the error below.

Traceback (most recent call last):
  File "C:/Users/User/PycharmProjects/Basic/WebCrawlerTest.py", line 19, in <module>
    trade_spider(2)
  File "C:/Users/User/PycharmProjects/Basic/WebCrawlerTest.py", line 9, in trade_spider
    source_code = requests.get(url)
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\api.py", line 69, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\sessions.py", line 465, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\sessions.py", line 594, in send
    history = [resp for resp in gen] if allow_redirects else []
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\sessions.py", line 594, in <listcomp>
    history = [resp for resp in gen] if allow_redirects else []
  File "C:\Users\User\AppData\Roaming\Python\Python34\site-packages\requests\sessions.py", line 114, in resolve_redirects
    raise TooManyRedirects('Exceeded %s redirects.' % self.max_redirects)
requests.exceptions.TooManyRedirects: Exceeded 30 redirects.

Upvotes: 3

Views: 5690

Answers (2)

Ajay

Reputation: 5347

The URL to that forum has changed.

Two modifications to your code:

1. Changed the forum URL to https://www.thenewboston.com/forum/recent_activity.php?page= (plus the page number).
2. Passed allow_redirects=False to requests.get() (to disable redirects, if any).

import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "https://www.thenewboston.com/forum/recent_activity.php?page=" + str(page)
        print(url)
        # allow_redirects=False stops requests from following the redirect loop
        source_code = requests.get(url, allow_redirects=False)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'post-title'}):
            href = link.get('href')
            print(href)
        page += 1

trade_spider(2)
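As a quick sanity check (my addition, not part of the answer above): with allow_redirects=False, requests hands back the 3xx response itself instead of following it, so you can see where a page is trying to send you by inspecting the Location header.

import requests

# The 3xx response is returned as-is, so the redirect target (if any)
# shows up in the Location header rather than being followed.
response = requests.get("https://www.thenewboston.com/forum/home.php?page=1",
                        allow_redirects=False)
if response.is_redirect:
    print("Redirects to:", response.headers.get("Location"))
else:
    print("Status:", response.status_code)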

Upvotes: 1

Josh Kupershmidt

Reputation: 2710

Well, it appears the page you are attempting to crawl is just plain broken. Try putting https://www.thenewboston.com/forum/home.php?page=1 into your web browser: when I try with Chrome, I get this error message:

This webpage has a redirect loop

ERR_TOO_MANY_REDIRECTS

You'll have to determine for yourself how you want to deal with such broken pages in your crawler.
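For example, one minimal way to handle it (a sketch, assuming you just want to skip such pages and keep crawling) is to catch the exception requests raises:

import requests

def fetch(url):
    """Fetch a page, returning None for URLs stuck in a redirect loop."""
    try:
        return requests.get(url, timeout=10)
    except requests.exceptions.TooManyRedirects:
        # The page redirects endlessly; log it and move on.
        print("Skipping broken page (redirect loop):", url)
        return None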

Upvotes: 1
