omneer

Reputation: 93

How to avoid StaleElementReferenceException error in Python Selenium

There are tons of questions about this error, but none of them has been able to help with my case.

First I get the elements by id that I need from the main URL:

from selenium import webdriver
from selenium.common.exceptions import StaleElementReferenceException
import time

url = 'http://www.mosquedirectory.co.uk/browse/uk/england/london'

browser = webdriver.Chrome()
browser.get(url)

listing = browser.find_elements_by_id('directory_listingBrowse')

Then I append them into a list to try to avoid the error, but that did not work:

hold = []

for i in listing:
    hold.append(i)

Then I iterate over this hold list in a for loop; here is the rest of the code:

for i in hold:

    try:
        ulclass = i.find_elements_by_css_selector('ul.c')
    except StaleElementReferenceException:
        pass

    link = []
    for i in ulclass:
        a = i.find_element_by_tag_name('a')
        link.append(a.get_attribute('href'))
        
    for i in link:
        browser.get(i)

    browser.get(url)
    time.sleep(2)

I even tried to avoid the error with a try/except block, which did not work either. At the end of the code I navigate back to the original page, again to avoid the error, and that did not work either. Which part am I missing?

Upvotes: 1

Views: 185

Answers (2)

0buz

Reputation: 3503

Slight change of approach. Collect all links first, then process them as you wish.

url = 'http://www.mosquedirectory.co.uk/browse/uk/england/london'

browser = webdriver.Chrome()
browser.get(url)

listings = WebDriverWait(browser, 30).until(EC.presence_of_all_elements_located((By.XPATH, "//div[@id='directory_listingBrowse']//ul//*//a[@href]")))

all_links = [listing.get_attribute('href') for listing in listings]

for link in all_links:
    browser.get(link)
    #do whatever else here

Don't forget to add these imports:

from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
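
For completeness, here is one possible way to fill in the "#do whatever else here" part. This is only a sketch: the wait target (an h1 on each listing page) and the final print are assumptions, so adjust them to whatever you actually need to scrape.

for link in all_links:
    browser.get(link)
    # Hypothetical wait: swap the locator for an element that actually
    # exists on each listing page before scraping it.
    WebDriverWait(browser, 30).until(
        EC.presence_of_element_located((By.TAG_NAME, "h1")))
    print(browser.title)  # placeholder for the real per-page work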

Upvotes: 2

undetected Selenium

Reputation: 193108

Instead of storing the WebElements, you can store their text contents. Ideally you need to induce WebDriverWait for visibility_of_all_elements_located(), and you can use either of the following Locator Strategies (a sketch for pulling the href values instead of the text follows after the list):

  • Using CSS_SELECTOR and text attribute:

    driver.get("http://www.mosquedirectory.co.uk/browse/uk/england/london")
    print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "#directory_listingBrowse h2 a")))])
    
  • Using XPATH and text attribute:

    driver.get("http://www.mosquedirectory.co.uk/browse/uk/england/london")
    print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@id='directory_listingBrowse']//h2//a")))])
    
  • Note : You have to add the following imports :

    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    
  • Console Output:

    ['Barking and Dagenham (10)', 'Barnet (13)', 'Bexley (2)', 'Brent (26)', 'Bromley (4)', 'Camden (20)', 'City of London (8)', 'Croydon (17)', 'Ealing (20)', 'Enfield (10)', 'Greenwich (7)', 'Hackney (21)', 'Hammersmith and Fulham (14)', 'Haringey (12)', 'Harrow (11)', 'Havering (4)', 'Hertforshire (1)', 'Hillingdon (8)', 'Hounslow (9)', 'Islington (17)', 'Kensington and Chelsea (12)', 'Kingston upon Thames (2)', 'Lambeth (16)', 'Lewisham (3)', 'Loughton (1)', 'Merton (7)', 'Middlesex (21)', 'Newham (59)', 'Redbridge (25)', 'Richmond upon Thames (1)', 'Romford (4)', 'Southwark (13)', 'Sutton (4)', 'Tower Hamlets (76)', 'Waltham Forest (21)', 'Wandsworth (9)', 'Westminster (27)']
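
Since the question ultimately needs the href values rather than the link text, here is a sketch that combines the same wait with get_attribute('href'). It reuses the CSS_SELECTOR variant above and assumes the same driver and imports are already in place:

driver.get("http://www.mosquedirectory.co.uk/browse/uk/england/london")
# Collect plain href strings; strings cannot go stale when the browser navigates away.
links = [my_elem.get_attribute("href") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "#directory_listingBrowse h2 a")))]
print(links)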
    

Upvotes: 0
