Reputation: 18166
I'd like to use Selenium to get the HTML of a page after a link has been clicked. Normally, I would just download the page the link points to, but in this case clicking the link fires off some obfuscated JavaScript that loads data back into the DOM of the current page. It's pretty nasty.
So here's what I expected to work. It loads the page, finds and clicks the link I need, then returns the DOM as text using outerHTML from JavaScript:
from selenium import webdriver

def get_html_after_click(i):
    '''Loads a page, then clicks an element, and returns the HTML'''
    browser = webdriver.Firefox()
    browser.get('http://www.sdjudicial.com/sc/scopinions.aspx')
    elem = browser.find_elements_by_class_name('igeb_ItemLabel')[i]
    elem.click()
    js = '''html = document.getElementsByTagName('html')[0];
            return html.outerHTML;'''
    html = browser.execute_script(js)
    browser.quit()
    return html
Except whenever I run this, the HTML I get back is the same as if I had done browser.page_source -- even though I've clicked the link and grabbed the DOM using JavaScript.
I'm new to Selenium. What am I missing?
Upvotes: 1
Views: 3567
Reputation: 8548
You are probably doing it too quickly. After you click the element, wait for an element that shows up because of the click, and only then read
browser.page_source
or execute your JavaScript.
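Here is a minimal sketch of that using an explicit wait with WebDriverWait. The (By.ID, 'opinion_results') locator is a placeholder, not something taken from the actual page; swap in a locator for whatever element the click actually loads:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def get_html_after_click(i):
    '''Loads the page, clicks an element, waits for the result, and returns the HTML'''
    browser = webdriver.Firefox()
    browser.get('http://www.sdjudicial.com/sc/scopinions.aspx')
    elem = browser.find_elements_by_class_name('igeb_ItemLabel')[i]
    elem.click()
    # Wait up to 10 seconds for an element that only exists after the click.
    # The locator below is a placeholder; use one that matches the content
    # the JavaScript actually injects into the page.
    WebDriverWait(browser, 10).until(
        EC.presence_of_element_located((By.ID, 'opinion_results'))
    )
    html = browser.page_source
    browser.quit()
    return html

Note that presence_of_element_located only waits for the node to exist in the DOM; if you need it to be rendered as well, visibility_of_element_located is the stricter condition.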
Upvotes: 2