Reputation: 145
I am scraping the following website, http://www.starcitygames.com/buylist/, but I keep getting the error below and I do not know what is causing it. When I first wrote the program it was working just fine, scraping the data I needed without any errors, but now I am getting this error and I have no idea why. I tried changing the Splash URL and the user agent, but that did not work; it still gave me the same error:
2019-07-23 12:37:28 [scrapy.core.engine] INFO: Spider opened
2019-07-23 12:37:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-23 12:37:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-07-23 12:37:28 [scrapy.extensions.throttle] INFO: slot: www.starcitygames.com | conc: 1 | delay:15000 ms (+0) | latency: 148 ms | size: 0 bytes
2019-07-23 12:37:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (307) to <GET https://www.starcitygames.com/login> from <GET http://www.starcitygames.com/buylist/>
2019-07-23 12:37:43 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.starcitygames.com/login> (failed 1 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:04 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.starcitygames.com/login> (failed 2 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www.starcitygames.com/login> (failed 3 times): An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.core.scraper] ERROR: Error downloading <GET https://www.starcitygames.com/login>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.ConnectError: An error occurred while connecting: 13: Permission denied.
2019-07-23 12:38:24 [scrapy.core.engine] INFO: Closing spider (finished)
LoginSpider.py
# Import needed functions and call needed Python files
import scrapy
import json
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import DataItem

# Spider class
class LoginSpider(scrapy.Spider):
    # Name of spider
    name = "LoginSpider"
    # URL where the data is located
    start_urls = ["http://www.starcitygames.com/buylist/"]

    # Login function
    def parse(self, response):
        # Log in using email and password, then proceed to the after_login function
        return scrapy.FormRequest.from_response(
            response,
            formcss='#existing_users form',
            formdata={'ex_usr_email': '[email protected]', 'ex_usr_pass': 'password'},
            callback=self.after_login
        )

    # Function to parse the buylist website
    def after_login(self, response):
        # Loop through the website, get the ID number for each category of card,
        # plug it into the end of the URL below, then go to the parse_data function
        for category_id in response.xpath('//select[@id="bl-category-options"]/option/@value').getall():
            yield scrapy.Request(
                url="http://www.starcitygames.com/buylist/search?search-type=category&id={category_id}".format(category_id=category_id),
                callback=self.parse_data,
            )

    # Function to parse the JSON data
    def parse_data(self, response):
        # Declare variables
        jsonresponse = json.loads(response.body_as_unicode())
        # Call the DataItem class from items.py
        items = DataItem()
        # Scrape the category name
        items['Category'] = jsonresponse['search']
        # Loop over the part of the response where the other data is located
        for result in jsonresponse['results']:
            # Inside this loop, run through every entry until all data is scraped
            for index in range(len(result)):
                # Scrape the rest of the needed data
                items['Card_Name'] = result[index]['name']
                items['Condition'] = result[index]['condition']
                items['Rarity'] = result[index]['rarity']
                items['Foil'] = result[index]['foil']
                items['Language'] = result[index]['language']
                items['Buy_Price'] = result[index]['price']
                # Return all data
                yield items
settings.py
# Name of project
BOT_NAME = 'LoginSpider'
# Module where spider is
SPIDER_MODULES = ['LoginSpider.spiders']
# Module where to create new spiders
NEWSPIDER_MODULE = 'LoginSpider.spiders'
# Obey robots.txt rules set by the website; disabled here so the spider is not detected as a web scraper
ROBOTSTXT_OBEY = False
# The path of the csv file that contains the proxies/user agents paired with URLs
#PROXY_CSV_FILE = "url.csv"
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36'
# The downloader middleware is a framework of hooks into Scrapy's request/response processing.
# It's a light, low-level system for globally altering Scrapy's requests and responses.
DOWNLOADER_MIDDLEWARES = {
    # This middleware enables working with sites that require cookies, such as those that use sessions.
    # It keeps track of cookies sent by web servers, and sends them back on subsequent requests (from that spider), just like web browsers do.
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    # This middleware allows compressed (gzip, deflate) traffic to be sent to/received from web sites.
    # It also supports decoding brotli-compressed responses, provided brotlipy is installed.
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
# URL the Splash server is running on; Splash must be running to use it
SPLASH_URL = 'http://199.89.192.98:8050'
# The class used to detect and filter duplicate requests
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
# This middleware provides low-level cache to all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
# Maximum number of concurrent items to process in parallel in the item pipelines (default: 100)
CONCURRENT_ITEMS = 1
# The maximum number of concurrent (i.e. simultaneous) requests that will be performed by the Scrapy downloader (default: 16)
CONCURRENT_REQUESTS = 1
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# If enabled, Scrapy will wait a random amount of time (between 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY) while fetching requests from the same website.
RANDOMIZE_DOWNLOAD_DELAY = True
# Delay between scraping webpages
DOWNLOAD_DELAY = 10
# The download delay setting will honor only one of:
# Number of concurrent requests made to one domain (enabled)
CONCURRENT_REQUESTS_PER_DOMAIN = 1
# Number of concurrent requests made to one IP (disabled)
#CONCURRENT_REQUESTS_PER_IP = 1
# Whether to enable the cookies middleware (enabled by default). If disabled, no cookies will be sent to web servers.
COOKIES_ENABLED = True
#REDIRECT_ENABLED = False
# Disable Telnet Console (enabled by default)
# A boolean which specifies if the telnet console will be enabled (provided its extension is also enabled)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Referer': 'http://www.starcitygames.com/buylist/'
}
Upvotes: 0
Views: 4080
Reputation: 145
What ended up fixing it was installing Scrapy-Cookies, a middleware that enables Scrapy to manage, save and restore cookies in various ways. With this middleware Scrapy can easily re-use cookies that were saved before or by multiple spiders, and share cookies between spiders, even in a spider cluster. Being able to share cookies is what fixed the problem. I also added this code to my settings.py:
DOWNLOADER_MIDDLEWARES.update({
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': None,
    'scrapy_cookies.downloadermiddlewares.cookies.CookiesMiddleware': 700,
})
COOKIES_STORAGE = 'scrapy_cookies.storage.sqlite.SQLiteStorage'
COOKIES_SQLITE_DATABASE = ':memory:'
COOKIES_PERSISTENCE_DIR = 'your-cookies-path'
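Note that with COOKIES_SQLITE_DATABASE = ':memory:' the cookies only live for the duration of a run. If you want the login session to survive between runs, scrapy-cookies can also persist cookies to disk; treat the exact setting names below as assumptions to verify against the scrapy-cookies documentation for your installed version:
# Hedged sketch: persist cookies between runs (verify setting names in the scrapy-cookies docs)
COOKIES_PERSISTENCE = True                   # save/restore cookies when the spider closes/opens
COOKIES_PERSISTENCE_DIR = 'cookies'          # where persisted cookies are written
COOKIES_SQLITE_DATABASE = 'cookies.sqlite'   # an on-disk SQLite file instead of ':memory:'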
Upvotes: 0
Reputation: 1062
Most sites don't like scraping programs, since they put a lot of load on their servers. You can try increasing the value of DOWNLOAD_DELAY in your settings.py, or alternatively try another method of scraping that is perhaps more website-friendly, like using Selenium.
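As a rough sketch of the Selenium route (it assumes chromedriver is installed and on your PATH, and the element locators are guesses based on the formcss and formdata values in the spider above, so verify them against the actual page source):
# Hedged Selenium sketch: log in with a real browser instead of FormRequest
from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
driver.get('http://www.starcitygames.com/buylist/')
# Field names are taken from the spider's formdata and may not be the page's real locators
driver.find_element_by_name('ex_usr_email').send_keys('[email protected]')
driver.find_element_by_name('ex_usr_pass').send_keys('password')
driver.find_element_by_css_selector('#existing_users form [type="submit"]').click()
# The logged-in page's HTML is now available for parsing
html = driver.page_source
driver.quit()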
Upvotes: 0
Reputation: 1815
Permission denied.
In 99% of cases it means that your IP got banned for some period of time. What I would recommend is the tor-polipo-haproxy Docker image. Maybe it will help. As a hedged sketch of how you could then route Scrapy's traffic through such a local proxy (the host and port below are assumptions that depend on how you run the container):
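# Hedged sketch: send every request through a local proxy such as the
# tor-polipo-haproxy container; the address is an assumption, adjust to your setup
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware honors request.meta['proxy']
        request.meta['proxy'] = 'http://127.0.0.1:8888'

# Then enable it in settings.py; the module path here is hypothetical:
# DOWNLOADER_MIDDLEWARES.update({'LoginSpider.middlewares.ProxyMiddleware': 100})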
Upvotes: 1