Kevin Burke

Reputation: 64756

Captchas in Scrapy

I'm working on a Scrapy app where I'm trying to log in to a site whose form uses a captcha (it's not spam). I am using ImagesPipeline to download the captcha image, and I am printing it to the screen for the user to solve. So far so good.

My question is: how can I restart the spider to submit the captcha/form information? Right now my spider requests the captcha page, then returns an Item containing the image_url of the captcha. This is then processed/downloaded by the ImagesPipeline and displayed to the user. It's unclear to me how I can resume the spider's progress and pass the solved captcha and the same session back to the spider, since I believe the spider has to return the item (i.e. finish) before the ImagesPipeline goes to work.

I've looked through the docs and examples, but I haven't found any that make it clear how to do this.
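For reference, a minimal sketch of the kind of Item/pipeline wiring described above (CaptchaItem is a placeholder name of mine, not the actual code):

from scrapy.item import Item, Field

class CaptchaItem(Item):
    image_urls = Field()  # ImagesPipeline downloads every URL listed here
    images = Field()      # ImagesPipeline fills this in after downloading

# settings.py (pipeline path as in the Scrapy docs of this era):
# ITEM_PIPELINES = ['scrapy.contrib.pipeline.images.ImagesPipeline']
# IMAGES_STORE = 'c:/captchas'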

Upvotes: 6

Views: 8530

Answers (2)

friso seyferth

Reputation: 31

I would not create an Item and use the ImagesPipeline.

import os
import subprocess
import urllib

from scrapy.http import Request, FormRequest
from scrapy.selector import HtmlXPathSelector

...

def start_requests(self):
    request = Request("http://webpagewithcaptchalogin.com/",
                      callback=self.fill_login_form)
    return [request]

def fill_login_form(self, response):
    x = HtmlXPathSelector(response)
    img_src = x.select("//img/@src").extract()

    # Delete the previous captcha file (if any), then use urllib to write
    # the new one to disk. Raw strings keep the backslashes intact.
    if os.path.exists(r"c:\captcha.jpg"):
        os.remove(r"c:\captcha.jpg")
    urllib.urlretrieve(img_src[0], r"c:\captcha.jpg")

    # I use a program here to show the jpg (actually send it somewhere):
    captcha = subprocess.check_output(r".\external_utility_solving_captcha.exe")

    # ...OR just get the input from the user on stdin:
    # captcha = raw_input("put captcha in manually> ")

    # This request calls process_home_page with the response; this way you
    # can chain pages from start_requests() all the way to parse().
    return [FormRequest.from_response(
        response,
        formnumber=0,
        formdata={'user': 'xxx', 'pass': 'xxx', 'captcha': captcha},
        callback=self.process_home_page)]

def process_home_page(self, response):
    # Check whether the login succeeded, etc.
    ...

What I do here is use urllib.urlretrieve(url) (to store the image), os.remove(file) (to delete the previous image), and subprocess.check_output (to call an external command-line utility that solves the captcha). The whole Scrapy infrastructure is not used in this "hack", because solving a captcha like this is always a hack.

That whole business of calling an external subprocess could have been done nicer, but this works.
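For what it's worth, one slightly nicer variant is to fetch the image through Scrapy itself instead of urllib, so the session cookies are reused; a rough sketch (solve_captcha and the meta key are my own names):

import urlparse  # Python 2; urllib.parse on Python 3

from scrapy.http import Request, FormRequest
from scrapy.selector import HtmlXPathSelector

def fill_login_form(self, response):
    x = HtmlXPathSelector(response)
    # Join against response.url in case the img src is relative.
    img_src = urlparse.urljoin(response.url,
                               x.select("//img/@src").extract()[0])
    # Fetching the image through Scrapy reuses the session cookies;
    # meta carries the login-page response along to the next callback.
    return [Request(img_src, meta={'login_response': response},
                    callback=self.solve_captcha)]

def solve_captcha(self, response):
    # response.body holds the raw image bytes here.
    with open(r"c:\captcha.jpg", "wb") as f:
        f.write(response.body)
    captcha = raw_input("put captcha in manually> ")
    return [FormRequest.from_response(
        response.meta['login_response'],
        formnumber=0,
        formdata={'user': 'xxx', 'pass': 'xxx', 'captcha': captcha},
        callback=self.process_home_page)]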

On some sites it's not possible to save the captcha image, and you have to open the page in a browser, call a screen-capture utility, and crop at an exact location to "cut out" the captcha. Now that is screen scraping.
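A minimal sketch of that screen-capture fallback, assuming Pillow is available (ImageGrab works on Windows and macOS only; the crop coordinates are placeholders you would measure for the target page):

# pip install Pillow
from PIL import ImageGrab

# Grab the whole screen while the captcha page is open in a browser,
# then crop to the captcha's on-screen position.
screen = ImageGrab.grab()
box = (100, 200, 300, 260)  # (left, upper, right, lower) -- placeholders
screen.crop(box).save(r"c:\captcha.jpg")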

Upvotes: 3

user

Reputation: 18529

This is how you might get it to work inside the spider.

self.crawler.engine.pause()
process_my_captcha()
self.crawler.engine.unpause()

Once you get to the request that needs the captcha, pause the engine, display the image, read the solution from the user, and resume the crawl by submitting a POST request for the login.
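Fleshed out, that might look roughly like this inside a spider callback (the form field names and the after_login callback are assumptions on my part):

from scrapy.http import FormRequest

def parse_captcha_page(self, response):
    # Pause the engine so nothing else is scheduled while we wait.
    self.crawler.engine.pause()
    try:
        # Display the downloaded captcha image with whatever viewer you
        # use, then block on user input for the solution.
        captcha = raw_input("enter the captcha> ")
    finally:
        self.crawler.engine.unpause()
    # Resume the crawl by submitting the login form with the solution.
    return FormRequest.from_response(
        response,
        formdata={'user': 'xxx', 'pass': 'xxx', 'captcha': captcha},
        callback=self.after_login)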

I'd be interested to know if the approach works for your case.

Upvotes: 5
