Cmag

Reputation: 15770

Python lxml/beautiful soup to find all links on a web page

I am writing a script to read a web page and build a database of links that match certain criteria. Right now I am stuck with lxml and understanding how to grab all the `<a href>` links from the HTML...

result = self._openurl(self.mainurl)
content = result.read()
html = lxml.html.fromstring(content)
print(lxml.html.find_rel_links(html, 'href'))

Upvotes: 10

Views: 16140

Answers (4)

Saeed Ghareh Daghi

Reputation: 1205

You can use this method:

from urllib.parse import urljoin, urlparse
from lxml import html as lh

class Crawler:
    def __init__(self, start_url):
        self.start_url = start_url
        self.base_url = f'{urlparse(self.start_url).scheme}://{urlparse(self.start_url).netloc}'
        self.visited_urls = set()

    def fetch_urls(self, html):
        urls = []
        dom = lh.fromstring(html)
        for href in dom.xpath('//a/@href'):
            # Resolve relative links, then keep only same-site, unvisited URLs
            url = urljoin(self.base_url, href)
            if url not in self.visited_urls and url.startswith(self.base_url):
                urls.append(url)
        return urls
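The same-site filtering in the loop above hinges on urljoin resolving relative hrefs against the base URL before the startswith check. A stdlib-only illustration (the URLs below are made up):

```python
from urllib.parse import urljoin

base_url = 'https://example.com'
hrefs = ['/about', 'contact.html', 'https://other.site/x']

# Absolute hrefs pass through urljoin unchanged; relative ones are
# resolved against base_url, so the same-site test works uniformly.
resolved = [urljoin(base_url, h) for h in hrefs]
same_site = [u for u in resolved if u.startswith(base_url)]
print(same_site)  # ['https://example.com/about', 'https://example.com/contact.html']
```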

Upvotes: 1

Stack Exchange User

Reputation: 778

With iterlinks, lxml provides an excellent function for this task.

This yields (element, attribute, link, pos) for every link [...] in an action, archive, background, cite, classid, codebase, data, href, longdesc, profile, src, usemap, dynsrc, or lowsrc attribute.
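A minimal sketch of iterlinks in use, parsing an inline snippet rather than a live page:

```python
import lxml.html

# iterlinks yields (element, attribute, link, pos) for every
# link-bearing attribute, in document order.
doc = lxml.html.fromstring(
    '<html><body>'
    '<a href="/page1">one</a>'
    '<img src="/logo.png">'
    '</body></html>'
)
links = [link for element, attribute, link, pos in doc.iterlinks()]
print(links)  # ['/page1', '/logo.png']
```

Note that it picks up the img src as well as the a href; filter on the attribute field if you only want anchors.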

Upvotes: 5

吳強福

Reputation: 373

I want to provide an alternative lxml-based solution, using CSSSelector from lxml.cssselect:

    import urllib.request
    import lxml.html
    from lxml.cssselect import CSSSelector

    connection = urllib.request.urlopen('http://www.yourTargetURL/')
    dom = lxml.html.fromstring(connection.read())
    # CSSSelector compiles the CSS expression once; calling it on a
    # parsed tree returns all matching elements
    selAnchor = CSSSelector('a')
    foundElements = selAnchor(dom)
    print([e.get('href') for e in foundElements])

Upvotes: 2

Fred Foo

Reputation: 363707

Use XPath. Something like (can't test from here):

urls = html.xpath('//a/@href')
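Applied to the question's variables, a minimal end-to-end sketch (parsing an inline snippet instead of a fetched page):

```python
import lxml.html

# Stand-in for result.read() from the question
content = '<html><body><a href="https://example.com/a">a</a><a href="/b">b</a></body></html>'
html = lxml.html.fromstring(content)
# The @href step extracts the attribute values directly as strings
urls = html.xpath('//a/@href')
print(urls)  # ['https://example.com/a', '/b']
```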

Upvotes: 15
