Murat Demir

Reputation: 716

How to solve Scrapy and Selenium Uncaught ReferenceError?

I am trying to scrape a website with Selenium on top of Scrapy. I have changed the Scrapy response URL with Selenium, but I run into a problem when I try to return start_urls from a function, as in the following code:

Spider.py (start_urls):

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    @property
    def start_urls(self):
        url = 'https://www.adana.bel.tr/home/hal_listesi'  # the page the script will scrape
        opts = Options()  # browser options: headless mode, user agent, etc.
        #opts.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36")
        opts.add_argument('headless')  # run Chrome without opening a browser window
        driver = webdriver.Chrome(options=opts, executable_path="chromedriver.exe")
        driver.get(url)  # load the listing page

        # The date is only shown on the first page, so store it on the spider.
        # `months` is a dict (defined elsewhere) that maps month names to numbers.
        self.day = int(driver.find_element_by_xpath("/html/body/div/div[3]/div/div[2]/main/div/div/div/div/div/div/div[1]/div[1]/span[1]").text)
        self.month = months[driver.find_element_by_xpath("/html/body/div/div[3]/div/div[2]/main/div/div/div/div/div/div/div[1]/div[1]/span[2]").text]
        self.year = int(driver.find_element_by_xpath("/html/body/div/div[3]/div/div[2]/main/div/div/div/div/div/div/div[1]/div[1]/span[3]").text)

        # Click the list entry and grab the URL it navigates to.
        product_list = driver.find_element_by_xpath("/html/body/div/div[3]/div/div[2]/main/div/div/div/div/div/div/div[1]/div[3]/a/img")
        product_list.click()
        new_url = driver.current_url
        driver.quit()  # close the browser
        return [new_url]

I am storing the dates on self because I have to get the date from the first page.

The browser is launched three times and the following error is printed three times as well. It takes too long, and I don't understand why it keeps producing the error.

Output:

DevTools listening on ws://127.0.0.1:50639/devtools/browser/cd5830e4-5a11-4f28-a12d-cb605e96075d
[1103/153027.438:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/home/hal_listesi' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/home/hal_listesi (54)
[1103/153032.344:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/hal-detay/396' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/hal-detay/396 (54)
[1103/153032.391:INFO:CONSOLE(1520)] "Uncaught ReferenceError: $ is not defined", source: https://www.adana.bel.tr/hal-detay/396 (1520)

DevTools listening on ws://127.0.0.1:50673/devtools/browser/e664e7e2-1c13-4128-bb20-a3df6437d2c7
[1103/153035.939:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/home/hal_listesi' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/home/hal_listesi (54)
[1103/153038.668:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/hal-detay/396' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/hal-detay/396 (54)
[1103/153038.710:INFO:CONSOLE(1520)] "Uncaught ReferenceError: $ is not defined", source: https://www.adana.bel.tr/hal-detay/396 (1520)

DevTools listening on ws://127.0.0.1:50707/devtools/browser/5fcb91e4-a076-4aa2-9173-7fd3565f741f
[1103/153042.020:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/home/hal_listesi' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/home/hal_listesi (54)
[1103/153045.407:INFO:CONSOLE(54)] "Mixed Content: The page at 'https://www.adana.bel.tr/hal-detay/396' was loaded over HTTPS, but requested an insecure stylesheet 'http://netdna.bootstrapcdn.com/font-awesome/4.1.0/css/font-awesome.min.css'. This request has been blocked; the content must be served over HTTPS.", source: https://www.adana.bel.tr/hal-detay/396 (54)
[1103/153045.459:INFO:CONSOLE(1520)] "Uncaught ReferenceError: $ is not defined", source: https://www.adana.bel.tr/hal-detay/396 (1520)

settings.py:

    BOT_NAME = 'first_bot'

    SPIDER_MODULES = ['first_bot.spiders']
    NEWSPIDER_MODULE = 'first_bot.spiders'

    # Crawl responsibly by identifying yourself (and your website) on the user-agent
    USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36"

    # Obey robots.txt rules
    ROBOTSTXT_OBEY = True

    DOWNLOAD_DELAY = 3

    ITEM_PIPELINES = {
        'first_bot.pipelines.FirstBotPipeline': 300,
    }

So how can I solve this problem? It takes about 30 seconds, which is too long for a single URL.
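
One plausible explanation (an assumption, not something the logs prove) is that start_urls is defined as a @property, so every access to self.start_urls re-evaluates it and launches a fresh Chrome instance. A minimal sketch that runs Selenium exactly once by overriding start_requests() instead (the spider name is illustrative; the XPaths are taken from the question):

    import scrapy
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options


    class HalSpider(scrapy.Spider):  # hypothetical spider name
        name = "hal"

        def start_requests(self):
            # Called once by Scrapy, so Chrome is launched only once.
            opts = Options()
            opts.add_argument("headless")
            driver = webdriver.Chrome(options=opts, executable_path="chromedriver.exe")
            try:
                driver.get("https://www.adana.bel.tr/home/hal_listesi")
                driver.find_element_by_xpath(
                    "/html/body/div/div[3]/div/div[2]/main/div/div/div/div/div/div/div[1]/div[3]/a/img"
                ).click()
                new_url = driver.current_url
            finally:
                driver.quit()  # always close the browser, even on errors
            yield scrapy.Request(new_url, callback=self.parse)

        def parse(self, response):
            ...  # parse the detail page with Scrapy selectors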

Upvotes: 0

Views: 235

Answers (1)

Murat Demir

Reputation: 716

I changed the start URL handling to a regular Scrapy parse request:

    from datetime import datetime, timedelta

    import scrapy

    def parse(self, response):
        # Yesterday's date -- the listing shows one entry per day.
        now_date = datetime.today() - timedelta(days=1)
        self.day = response.xpath("//*[@class='day']/text()").extract()
        self.month = response.xpath("//*[@class='month']/text()").extract()
        self.year = response.xpath("//*[@class='year']/text()").extract()

        # Find the entry whose day matches yesterday and remember its link and date parts.
        for count, check in enumerate(self.day):
            if now_date.day == int(check):
                url = response.xpath("//*[@class='indir']/a/@href").extract()[count]
                self.curt_day = self.day[count]
                self.curt_month = self.month[count]
                self.curt_year = self.year[count]

        absolute_url = response.urljoin(url)
        request = scrapy.Request(
            absolute_url, callback=self.parse_contractors)
        yield request
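
A variation on the same idea (a sketch with hypothetical parameter names, assuming Scrapy 1.7+) passes the matched date through the request via cb_kwargs instead of storing it on the spider, which keeps the values tied to the specific page being parsed:

    def parse(self, response):
        ...
        yield scrapy.Request(
            absolute_url,
            callback=self.parse_contractors,
            # cb_kwargs are passed to the callback as keyword arguments (Scrapy 1.7+)
            cb_kwargs={"day": self.curt_day, "month": self.curt_month, "year": self.curt_year},
        )

    def parse_contractors(self, response, day, month, year):
        # The date arrives together with the response; no spider-level state needed.
        ...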

Upvotes: 1
