alexandernst

Reputation: 15099

Feeding a Celery queue with its own results

I'm coding a crawler for my SPA. Since it's a SPA, I can't use wget/curl or any other non-browser-based solution for crawling, because I need a browser to run the JavaScript in my SPA.

I coded this using Python and Selenium. The crawler starts at the homepage, scans for all href attributes, saves them in a set, discards the ones I have already visited (visited meaning opened with Selenium and all hrefs collected), then takes the next URL from the set and visits it. It repeats this process until it has visited every link.

The code looks like this:

from urllib.parse import urlparse

def main():

    ...

    # Here we will be saving all the links that we can find in the DOM of
    # each visited URL
    collected = set()
    collected.add(crawler.start_url)

    # Here we will be saving all the URLs that we have already visited
    visited = set()

    base_netloc = urlparse(crawler.start_url).netloc

    while len(collected):
        url = collected.pop()

        urls = collect_urls(url)
        urls = [x for x in urls if x not in visited and urlparse(x).netloc == base_netloc]

        collected = collected.union(urls)
        visited.add(url)

    crawler.links = list(visited)
    crawler.save()

def collect_urls(url):
    browser = Browser()
    browser.fetch(url)

    urls = set()
    elements = browser.get_xpath_elements("//a[@href]")
    for element in elements:
        link = browser.get_element_attribute(element, "href")

        # Skip the link when it just points back to the page we are visiting
        if link != url:
            urls.add(link)

    browser.stop()
    return urls

I want to make each call to collect_urls a Celery task, so it can be retried if it fails, and also to make the whole thing faster by using several workers. The problem is that collect_urls is called from inside the while loop, which depends on the collected set, which in turn is filled by the results of collect_urls.
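For reference, a minimal sketch of what that task declaration with automatic retries could look like (the Celery app instance and the broker URL here are assumptions, not part of the code above):

from celery import Celery

# assumed Celery application; the broker URL is only an example
app = Celery("crawler", broker="redis://localhost:6379/0")

@app.task(autoretry_for=(Exception,), retry_kwargs={"max_retries": 3}, retry_backoff=True)
def collect_urls(url):
    # same body as above: open the page with the Browser wrapper,
    # collect every href and return the set of links
    ...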

I know I can call a Celery task with delay() and wait for the result with get(), so my code would then look like this:

    while len(collected):
        url = collected.pop()

        task = collect_urls.delay(url)
        urls = task.get(timeout=30)

That converts my calls to collect_urls into Celery tasks and lets me retry when something fails, but I still can't use more than one worker, because I block on get() waiting for the result of each delay() call before dispatching the next one.

How can I refactor my code in such a way that it would allow me to use several workers for collect_urls?

Upvotes: 1

Views: 216

Answers (1)

2ps

Reputation: 15946

Short answer: if you want this to be distributed for speed, you have to turn the set of already-visited URLs into a cross-process-safe structure. You could do this, for example, by storing it as a set in Redis or in a database table. Once you do that, you can update your code to do the following:

# kick off the initial set of tasks:
result_id = str(uuid.uuid4())
for x in collected:
    collect_urls.delay(x, result_id)
return result_id

You can use that result_id to periodically check the size of the set of visited URLs. Once the set has kept the same length for n consecutive checks, you deem the crawl done.
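A minimal sketch of that polling check, assuming the visited set lives in a Redis set keyed by result_id and that r is a redis-py client (both names are assumptions, not part of the answer above):

import time

def wait_until_done(r, result_id, stable_checks=5, interval=10):
    # SCARD returns the current size of the Redis set
    last_size, stable = r.scard(result_id), 0
    while stable < stable_checks:
        time.sleep(interval)
        size = r.scard(result_id)
        stable = stable + 1 if size == last_size else 0
        last_size = size
    # every member of the set is a visited URL
    return r.smembers(result_id)

The stable_checks and interval values are arbitrary; tune them to how long a single page takes to crawl.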

In the collect_urls function, you essentially do:

def collect_urls(url, result_id):
    # redis_client is assumed to be a module-level redis.Redis() connection;
    # SISMEMBER checks whether the set stored at result_id already contains url
    if redis_client.sismember(result_id, url):
        return
    # mark the url as visited; in redis this is SADD
    redis_client.sadd(result_id, url)
    # collect urls as before
    ...
    # but instead of returning the urls, you kick off new tasks
    for x in urls:
        collect_urls.delay(x, result_id)

If you use Redis, all of the collected/visited URLs will be contained in the Redis set stored under the key identified by result_id. You don't have to use Redis; you can just as easily do this with rows in a database table that has result_id in one column and the URL in another.
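To tie this back to main() in the question: once the polling sketch above reports that the set has stopped growing, the caller can read every visited URL back with SMEMBERS and save it (r and wait_until_done are the assumed names from that sketch):

import redis

r = redis.Redis(decode_responses=True)  # decode_responses so SMEMBERS returns str instead of bytes

# kick off the initial tasks as shown above, then wait for the crawl to settle
urls = wait_until_done(r, result_id)

crawler.links = list(urls)
crawler.save()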

Upvotes: 1
