Mighty God Loki

Reputation: 135

My web crawler doesn't work with BeautifulSoup

I am trying to write a web crawler in Python. I am borrowing the code from the book Programming Collective Intelligence by Toby Segaran. Since the code in the book is outdated, I made the necessary changes, but the program still doesn't execute as expected. Here is my code:

import urllib
from urllib import request
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import bs4
# Create a list of words to ignore
ignorewords=set(['the','of','to','and','a','in','is','it'])

class crawler:
    # Initialize the crawler with the name of the database
    def __init__(self,dbname): 
        pass
    def __del__(self): pass
    def dbcommit(self):
        pass
    # Auxiliary function for getting an entry id and adding
    # it if it's not present
    def getentryid(self,table,field,value,createnew=True):
        return None
    # Index an individual page
    def addtoindex(self,url,soup):
        print('Indexing %s' % url)
    # Extract the text from an HTML page (no tags)
    def gettextonly(self,soup):
        return None
    # Separate the words by any non-whitespace character
    def separatewords(self,text):
        return None
    # Return true if this url is already indexed
    def isindexed(self,url):
        return False
    # Add a link between two pages
    def addlinkref(self,urlFrom,urlTo,linkText):
        pass
    # Starting with a list of pages, do a breadth
    # first search to the given depth, indexing pages
    # as we go
    def crawl(self,pages,depth=2):
        pass
    # Create the database tables
    def createindextables(self):
        pass

    def crawl(self,pages,depth=2):
        for i in range(depth):
            newpages=set()
            for page in pages:
                try:
                    c=request.urlopen(page)
                except:
                    print("Could not open %s" % page)
                    continue
                soup=BeautifulSoup(c.read())
                self.addtoindex(page,soup)
                links=soup('a')
                for link in links:
                    if ('href' in dict(link.attrs)):
                        url=urljoin(page,link['href'])
                        if url.find("'")!=-1: continue
                        url=url.split('#')[0] # remove location portion
                        if url[0:4]=='http' and not self.isindexed(url):
                            newpages.add(url)
                        linkText=self.gettextonly(link)
                        self.addlinkref(page,url,linkText)
                self.dbcommit()
        pages=newpages


pagelist=['http://google.com']
#pagelist=['file:///C:/Users/admin/Desktop/abcd.html']
crawler=crawler('')
crawler.crawl(pagelist)

The only output I get is:

Indexing http://google.com
Indexing http://google.com
press any key to continue...

Every time I put another link in pagelist I get the same kind of output, "Indexing xyz", where xyz is each link I put in pagelist. I also tried making an HTML file with lots of <a> tags, but that didn't work either.

Upvotes: 2

Views: 273

Answers (1)

HH1

Reputation: 598

The problem may be in your line links=soup('a'). If you want to find all the <a> elements, you should use the find_all method (cf. the bs4 documentation).
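
For reference, here is a minimal, self-contained sketch of that link-extraction step using find_all. The start URL and the html.parser choice are illustrative assumptions, not part of the original code; passing an explicit parser also avoids bs4's "no parser was explicitly specified" warning:

from urllib import request
from urllib.parse import urljoin
from bs4 import BeautifulSoup

start = 'http://google.com'  # illustrative starting page
c = request.urlopen(start)
# html.parser is the stdlib parser; lxml also works if installed
soup = BeautifulSoup(c.read(), 'html.parser')

# find_all('a') returns every <a> tag in the document
for link in soup.find_all('a'):
    if link.has_attr('href'):
        url = urljoin(start, link['href'])  # resolve relative links
        url = url.split('#')[0]             # drop the location portion
        if url.startswith('http'):
            print(url)

Separately, note that in the posted code pages=newpages sits after the for i in range(depth) loop rather than inside it, so every depth iteration re-crawls the same starting pages; with depth=2 that would explain seeing "Indexing http://google.com" exactly twice. Moving that assignment to the end of the loop body lets the crawl advance to the newly discovered pages on the next iteration.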

Upvotes: 2
