Reputation: 559
I am trying to use Rule
and LinkExtractor
to extract links. This is my code in the scrapy shell:
from urllib.parse import quote
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
url = f'https://www.google.com/search?q={quote("Hello World")}'
fetch(url)
x = LinkExtractor(restrict_xpaths='//div[@class="r"]/a')
y = Rule(x)
I tried dir(x)
to see what methods I can call on it; the best I can find is x.__sizeof__(),
but that returns 32 instead of the actual 10 links.
My question is: how can I find out which links were actually extracted (as a list or something similar)?
This is what dir(x)
shows:
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_csstranslator', '_extract_links', '_link_allowed', '_process_links', 'allow_domains', 'allow_res', 'canonicalize', 'deny_domains', 'deny_extensions', 'deny_res', 'extract_links', 'link_extractor', 'matches', 'restrict_xpaths']
Upvotes: 2
Views: 172
Reputation: 1487
You can use the following method to get exactly what is extracted:
x = LinkExtractor(restrict_xpaths='//div[@class="r"]/a')
link_objects = x.extract_links(response)  # a list of Link objects
For the actual URLs you can use:
for link in link_objects:
    print(link.url)
Upvotes: 2