ekta

Reputation: 1620

Is Selenium thread-safe for scraping with Python?

I am executing a Python script with threading: given a "query" term that I put in the Queue, each thread builds the URL with the query parameters, sets the cookies, and parses the web page to return the products and their URLs. Here's the script.

Task: for a given set of queries, store the top 20 product IDs in a file (or fewer, if the query returns fewer results).

I remember reading that Selenium is not thread-safe. I just want to confirm whether this problem is caused by that limitation, and whether there is a way to make it work in concurrent threads. The main motivation is that the script is I/O bound, so it is very slow at fetching roughly 3000 URLs sequentially.

from pyvirtualdisplay import Display
from data_mining.scraping import scraping_conf as sf #custom file with rules for scraping
import Queue
import threading
import urllib      # urllib.urlencode is used when building the search url
import urllib2
import logging
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By

num_threads = 5
queue = Queue.Queue()       # holds the query terms to process
out_queue = Queue.Queue()   # holds the scraped results to write out
# merchant_domain, base_url, fh and fh_query come from the (masked) setup code
COOKIES = sf.__MERCHANT_PARAMS[merchant_domain]['COOKIES']
query_args = sf.__MERCHANT_PARAMS[merchant_domain]['QUERY_ARGS']


class ThreadUrl(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, queue, out_queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.out_queue = out_queue

    def url_from_query(self,query):
        for key,val in query_args.items():
            if query_args[key]=='query' :
                query_args[key]=query
                print "query", query
            try :
                url = base_url+urllib.urlencode(query_args)
                print "url"
                return url
            except Exception as e:
                log()
                return None


    def init_driver_and_scrape(self,base_url,query,url):
        # Will use Pyvirtual display later 
        #display = Display(visible=0, size=(1024, 768))
        #display.start()
        fp = webdriver.FirefoxProfile()
        fp.set_preference("browser.download.folderList",2)
        fp.set_preference("javascript.enabled", True)
        driver = webdriver.Firefox(firefox_profile=fp)
        driver.delete_all_cookies()
        driver.get(base_url)
        for key,val in COOKIES[exp].items():
            driver.add_cookie({'name':key,'value':val,'path':'/','domain': merchant_domain,'secure':False,'expiry':None})
        print "printing cookie name & value"
        for cookie in driver.get_cookies():
            if cookie['name'] in COOKIES[exp].keys():
                print cookie['name'],"-->", cookie['value']
        driver.get(base_url+'search=junk') # To counter any refresh issues
        driver.implicitly_wait(20)
        driver.execute_script("window.scrollTo(0, 2000)")
        print "url inside scrape", url
        if url is not None :
            flag = True
            i=-1
            row_data,row_res=(),()
            while flag :
                i=i+1
                try :
                    driver.get(url)
                    key=sf.__MERCHANT_PARAMS[merchant_domain]['GET_ITEM_BY_ID']+str(i)
                    print key
                    item=driver.find_element_by_id(key)
                    href=item.get_attribute("href")
                    prod_id=eval(sf.__MERCHANT_PARAMS[merchant_domain]['PRODUCTID_EVAL_FUNC'])
                    row_res=row_res+(prod_id,)
                    print url,row_res
                except Exception as e:
                    log()
                    flag =False
            driver.delete_all_cookies()
            driver.close()

            return query+"|"+str(row_res)+"\n"  #  row_data, row_res
        else :
            return  [query+"|"+"None"]+"\n"
    def run(self):
        while True:
            #grabs host from queue
            query = self.queue.get()
            url=self.url_from_query(query)
            print "query, url", query, url
            data=self.init_driver_and_scrape(base_url,query,url)
            self.out_queue.put(data)

            #signals to queue job is done
            self.queue.task_done()


class DatamineThread(threading.Thread):
    """Threaded Url Grab"""
    def __init__(self, out_queue):
        threading.Thread.__init__(self)
        self.out_queue = out_queue

    def run(self):
        while True:
            #grabs host from queue
            data = self.out_queue.get()
            fh.write(str(data)+"\n")
            #signals to queue job is done
            self.out_queue.task_done()

start = time.time()

def log():
    logging_hndl=logging.getLogger("get_results_url")
    logging_hndl.exception("Stacktrace from "+"get_results_url")


df=pd.read_csv(fh_query, sep='|',skiprows=0,header=0,usecols=None,error_bad_lines=False) # read all queries
query_list=list(df['query'].values)[0:3]

def main():
    exp="Control"
    #spawn a pool of threads, and pass them queue instance
    for i in range(num_threads):
        t = ThreadUrl(queue, out_queue)
        t.setDaemon(True)
        t.start()

    #populate queue with data
    print query_list
    for query in query_list:
        queue.put(query)

    for i in range(num_threads):
        dt = DatamineThread(out_queue)
        dt.setDaemon(True)
        dt.start()


    #wait on the queue until everything has been processed
    queue.join()
    out_queue.join()

main()
print "Elapsed Time: %s" % (time.time() - start)

While I should be getting all the search results from each URL's page, I get only the first (i=0) search card, and the scrape doesn't run for all queries/URLs. What am I doing wrong?

What I expect -

url inside scrape http://<masked>/search=nike+costume
searchResultsItem0
url inside scrape http://<masked>/search=red+tops
searchResultsItem0
url inside scrape http://<masked>/search=halloween+costumes
searchResultsItem0
and more searchResultsItem(s), like searchResultsItem1, searchResultsItem2 and so on.

What I get

url inside scrape http://<masked>/search=nike+costume
searchResultsItem0
url inside scrape http://<masked>/search=nike+costume
searchResultsItem0
url inside scrape http://<masked>/search=nike+costume
searchResultsItem0

The skeleton code was taken from

http://www.ibm.com/developerworks/aix/library/au-threadingpython/

Additionally, when I use pyvirtualdisplay, will that work with threading as well? I also tried processes with the same Selenium code, and it gave the same error: essentially it opens up 3 Firefox browsers with the exact same URL, while it should be opening them for different items in the queue. The rules are stored in a file imported as sf, which holds all the custom attributes of a base domain.

Since setting the cookies is an integral part of my script, I can't use dryscrape.

EDIT: I tried to localize the error, and here's what I found. In the custom rules file (imported as "sf" above), I had defined QUERY_ARGS as

__MERCHANT_PARAMS = {
  "some_domain.com" :
  {
    COOKIES: { <a dict of dict, masked here>
              },
    ... more such rules
    QUERY_ARGS:{'search':'query'}
  }
}

So what is really happening is that calling

query_args = sf.__MERCHANT_PARAMS[merchant_domain]['QUERY_ARGS']

should return the dict {'search':'query'}, but inside the thread class it instead raises

AttributeError: 'module' object has no attribute '_ThreadUrl__MERCHANT_PARAMS'

This is where I don't understand how the thread is prepending '_ThreadUrl__'. I also tried re-initializing query_args inside the url_from_query method, but that doesn't work either.
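For context, here's a minimal, self-contained sketch (the scraping_conf module is faked with types.ModuleType, so the real file isn't needed) that reproduces the same AttributeError; it suggests the '_ThreadUrl__' prefix comes from Python's name mangling of double-underscore identifiers inside a class body, not from threading:

import types

sf = types.ModuleType("scraping_conf")
setattr(sf, "__MERCHANT_PARAMS",
        {"some_domain.com": {"QUERY_ARGS": {"search": "query"}}})

# At module level nothing is mangled, so this works:
print(getattr(sf, "__MERCHANT_PARAMS")["some_domain.com"]["QUERY_ARGS"])

class ThreadUrl(object):
    def url_from_query(self):
        # Inside a class body the compiler rewrites __MERCHANT_PARAMS to
        # _ThreadUrl__MERCHANT_PARAMS, so this raises:
        #   AttributeError: 'module' object has no attribute
        #   '_ThreadUrl__MERCHANT_PARAMS'
        return sf.__MERCHANT_PARAMS["some_domain.com"]["QUERY_ARGS"]

    def url_from_query_fixed(self):
        # Strings passed to getattr are not mangled; renaming the config dict
        # to a single leading underscore (_MERCHANT_PARAMS) would also avoid it.
        return getattr(sf, "__MERCHANT_PARAMS")["some_domain.com"]["QUERY_ARGS"]

t = ThreadUrl()
print(t.url_from_query_fixed())   # {'search': 'query'}
try:
    t.url_from_query()
except AttributeError as e:
    print(e)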

Any pointers on what I am doing wrong?

Upvotes: 3

Views: 2752

Answers (1)

chandank

Reputation: 1011

I may be replying pretty late to this. However, I tested it on Python 2.7, and both options, multithreading and multiprocessing, work with Selenium; each worker opens its own separate browser.
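As a rough illustration of the pattern I mean (a minimal sketch, not the asker's full script): give each worker thread its own WebDriver instance and let it pull items from a shared queue. The search URL and query list below are just placeholders:

import Queue
import threading
import urllib
from selenium import webdriver

queries = Queue.Queue()
results = Queue.Queue()

def worker():
    driver = webdriver.Firefox()   # one driver per thread, never shared
    try:
        while True:
            try:
                query = queries.get_nowait()
            except Queue.Empty:
                break
            # placeholder URL; substitute the real search URL builder here
            driver.get("http://example.com/search?" +
                       urllib.urlencode({"search": query}))
            results.put((query, driver.title))
            queries.task_done()
    finally:
        driver.quit()

for q in ["nike costume", "red tops", "halloween costumes"]:
    queries.put(q)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not results.empty():
    print(results.get())

The important part is that each driver is created, used, and quit inside a single thread, and the only objects shared between threads are the thread-safe Queue instances.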

Upvotes: 1
