Reputation: 1
I am currently working on a web scraping project using Python and Selenium, and I'm encountering difficulties with a particular webpage that seems to be heavily reliant on JavaScript. When I inspect the page source, I can't find any HTML code to search within. Consequently, when I use the find_element method in Selenium, it raises a NoSuchElementException.
I have attempted various approaches, including trying different attributes such as CSS selectors and class names, but none have been successful in locating the elements I need.
from selenium import webdriver
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager

browser = webdriver.Chrome(ChromeDriverManager().install())
browser.get('https://my.te.eg/user/login')
browser.find_element(By.ID, "login-service-number-et")
I also tried the MechanicalSoup library, and it doesn't work either. However, when I inspect an element on the page, the browser shows me the HTML code; but when I open 'View page source', it is full of JavaScript code, and the WebDriver object always gets that JavaScript code.
Upvotes: -1
Views: 110
Reputation: 66
It is due to the page not fully loading when you're searching for the element, which is what causes the NoSuchElementException. You'll have to wait until that resource is visible. Have a look at WebDriverWait, but essentially, for example:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

# Create a wait object with the maximum time it should wait before raising a TimeoutException
wait = WebDriverWait(browser, timeout_in_seconds)

# Wait until the element to be located is visible, i.e. if you're waiting on a button
# to load, it'll wait until it's visible before whatever you want to do with it
wait.until(EC.visibility_of_element_located((By.ID, "<locator>")))
That's just a high-level view, but expected conditions cover different use cases depending on what you'd like to do.
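Applied to your page, a minimal sketch might look like this (assuming the login field ID from your snippet, a 10-second timeout, and that chromedriver is already set up as in your code; adjust as needed):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get('https://my.te.eg/user/login')

# Wait up to 10 seconds for the JavaScript-rendered input to appear;
# raises TimeoutException if it never shows up
field = WebDriverWait(browser, 10).until(
    EC.visibility_of_element_located((By.ID, "login-service-number-et"))
)
# Now it's safe to interact, e.g. field.send_keys(...)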
Documentation on wait strategies: https://www.selenium.dev/documentation/webdriver/waits/
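Those docs also cover implicit waits as an alternative; a minimal sketch, assuming a session-wide setting is acceptable for your use case:

# Tell the driver to keep polling for up to 10 seconds on every element
# lookup before raising NoSuchElementException (applies to the whole session)
browser.implicitly_wait(10)
browser.find_element(By.ID, "login-service-number-et")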
Upvotes: 0