Bloxx

Reputation: 1560

Scraping a webpage with tabs that do not change the URL

I am trying to scrape a Nasdaq webpage and am having an issue locating elements:

My code:

from selenium import webdriver
import time
import pandas as pd

driver = webdriver.Chrome()
driver.get('http://www.nasdaqomxnordic.com/shares/microsite?Instrument=CSE32679&symbol=ALK%20B&name=ALK-Abell%C3%B3%20B')

time.sleep(5)
btn_overview = driver.find_element_by_xpath('//*[@id="tabarea"]/section/nav/ul/li[2]/a')
btn_overview.click()
time.sleep(5) 
employees = driver.find_element_by_xpath('//*[@id="CompanyProfile"]/div[6]')

After the last call, I receive the following error:

NoSuchElementException: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="CompanyProfile"]/div[6]"}

Normally the problem would be a wrong XPath, but I have tried several selectors, including locating by id. I suspect it has something to do with the tabs (in my case navigating to "Overview"). Visually the webpage changes, but if, for example, I scrape the table, I still get the one from the first page:

table_test = pd.read_html(driver.page_source)[0]  

What am I missing or doing wrong?

Upvotes: 0

Views: 210

Answers (2)

chitown88

Reputation: 28565

Are you sure you need Selenium? The Overview tab loads its data from a Morningstar page, which you can request directly:

import requests
from bs4 import BeautifulSoup

# This is the Morningstar page that the Nasdaq site embeds in its Overview iframe
url = 'http://lt.morningstar.com/gj8uge2g9k/stockreport/default.aspx'
payload = {'SecurityToken': '0P0000A5LL]3]1]E0EXG$XCSE_3060'}

response = requests.get(url, params=payload)
soup = BeautifulSoup(response.text, 'html.parser')

# The "Employees" heading is followed by the element holding the value
employees = soup.find('h3', text='Employees').next_sibling.text
print(employees)

Output:

2,537
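If you want more than the head count, the same pattern should generalize. A minimal sketch, assuming every profile field is an h3 label followed by a sibling element holding the value (as with Employees above), and reusing the CompanyProfile id from the question's XPath:

# Sketch: collect every label/value pair under the CompanyProfile block
profile = {}
for label in soup.select('#CompanyProfile h3'):
    value = label.find_next_sibling()       # element holding the value for this label
    if value is not None:
        profile[label.get_text(strip=True)] = value.get_text(strip=True)

print(profile.get('Employees'))             # e.g. '2,537'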

Upvotes: 1

Md. Fazlul Hoque

Reputation: 16187

The Overview content is loaded inside an iframe, so you have to switch into it before locating the element:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager

option = Options()
option.add_argument("start-maximized")

# keep Chrome open after the script finishes
option.add_experimental_option("detach", True)

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=option)
driver.get('http://www.nasdaqomxnordic.com/shares/microsite?Instrument=CSE32679&symbol=ALK%20B&name=ALK-Abell%C3%B3%20B')

# open the Overview tab, then dismiss the cookie banner
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="tabarea"]/section/nav/ul/li[2]/a'))).click()
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="cookieConsentOK"]'))).click()

# the Overview data lives inside the Morningstar iframe, so switch into it first
WebDriverWait(driver, 20).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe#MorningstarIFrame")))

# the element's text contains both the label and the value; keep the number
employees = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.XPATH, '//*[@id="CompanyProfile"]/div[6]'))).text.split()[1]
print(employees)

Output:

2,537
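Note that after frame_to_be_available_and_switch_to_it the driver keeps operating inside the iframe. If you want to interact with the Nasdaq tabs again afterwards, switch back first:

# return to the top-level page once done inside the Morningstar iframe
driver.switch_to.default_content()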

The snippet uses webdriver_manager to download and manage a matching ChromeDriver automatically.

Upvotes: 1
