Reputation: 31
New to programming
I can't scrape content from some subdomains belonging to the same website. For example, I can scrape it.example.com, es.example.com, and pt.example.com, but when I try to do the same with fr.example.com or us.example.com, I get:
2017-12-17 14:20:27 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6025
2017-12-17 14:21:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-17 14:22:38 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://fr.example.com/robots.txt> (failed 1 times): TCP connection timed out: 110: Connection timed out.
Here's the spider, some.py:
import scrapy
import itertools

class SomeSpider(scrapy.Spider):
    name = 'some'
    allowed_domains = ['https://fr.example.com']

    def start_requests(self):
        categories = ['thing1', 'thing2', 'thing3']
        base = "https://fr.example.com/things?t={category}&p={index}"
        for category, index in itertools.product(categories, range(1, 11)):
            yield scrapy.Request(base.format(category=category, index=index))

    def parse(self, response):
        response.selector.remove_namespaces()
        info1 = response.css("span.info1").extract()
        info2 = response.css("span.info2").extract()
        for item in zip(info1, info2):
            scraped_info = {
                'info1': item[0],
                'info2': item[1],
            }
            yield scraped_info
What I have tried:
Running the spider from a different IP (same problem with the same domains)
Adding a pool of IPs (didn't work)
Following a suggestion found on Stack Overflow: in settings.py, set
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'
ROBOTSTXT_OBEY = False
Any idea is welcome!
Upvotes: 2
Views: 2934
Reputation: 5677
Try to access the page with the requests package instead of scrapy, and see if it works.
import requests

url = 'https://fr.example.com'  # the scheme is required; requests raises MissingSchema on a bare domain
response = requests.get(url)
print(response.text)
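To make the check a bit more informative, you could wrap the request in a small helper that sets an explicit timeout and a browser-like User-Agent, so a hang fails fast and user-agent filtering can be ruled out. This is only a diagnostic sketch; the `probe` function name and the 10-second timeout are my own choices, not anything from your project:

```python
import requests

def probe(url, timeout=10):
    """Fetch url with a browser-like User-Agent; return the HTTP status
    code, or None if the connection fails or times out."""
    headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)"}
    try:
        return requests.get(url, headers=headers, timeout=timeout).status_code
    except requests.exceptions.RequestException as exc:
        # Covers DNS failures, refused connections, and timeouts alike.
        print("Request failed:", exc)
        return None
```

If `probe("https://fr.example.com")` also times out while the other subdomains succeed, the block is happening at the network level, not in your Scrapy configuration.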
Upvotes: 2