HStinnett

Reputation: 21

Automated Search TimeOut Error when Scraping

I am using Python to scrape the data from a specific table and save it into a file that will collect the same table from multiple webpages (one per compound). However, I'm having difficulty identifying the appropriate table with BeautifulSoup. Here is the relevant HTML:

Table Identifier HTML from Website

Here is the relevant portion of my code:

import requests
from bs4 import BeautifulSoup

url2 = "https://chem.nlm.nih.gov/chemidplus/rn/50-00-0"
r = requests.get(url2)
html = r.content

soup = BeautifulSoup(html, 'lxml')
print(soup.prettify())

This returns HTML containing only the following script: "Automated searches: max 1 every 3 seconds. Reloading in 1. setTimeout(function(){location.reload(true);},1100);"
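That response suggests the site is rate-limiting automated requests and serving a JavaScript reload page instead of the data. One possible workaround (a sketch, not an official API; the marker string is taken from the message above, and the retry count and wait are illustrative) is to detect the interstitial and retry after a pause:

```python
import time

import requests

# Substring that appears in the rate-limit interstitial shown above
RATE_LIMIT_MARKER = b"setTimeout(function(){location.reload"

def is_rate_limit_page(html: bytes) -> bool:
    """Heuristic: the throttling page embeds a JS reload timer."""
    return RATE_LIMIT_MARKER in html

def fetch_with_retry(url, retries=3, wait=3.0):
    """Fetch url, retrying whenever the rate-limit interstitial comes back."""
    for _ in range(retries):
        r = requests.get(url)
        if not is_rate_limit_page(r.content):
            return r.content
        time.sleep(wait)  # site asks for max 1 request every 3 seconds
    raise RuntimeError(f"Still rate-limited after {retries} attempts: {url}")
```

With this in place, `fetch_with_retry(url2)` either returns the real page body or raises after a few throttled attempts.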

I believe this is the error in my code, but a web search turned up no explanation for why this message appears or how to fix it. *UPDATE/CONCLUSION: I added driver.implicitly_wait(3) after loading the page and after identifying the table to slow the program down. The error has not recurred.

Upvotes: 0

Views: 519

Answers (1)

HStinnett

Reputation: 21

UPDATE/CONCLUSION: I added driver.implicitly_wait(3) after loading the page and after identifying the table to slow the program down. The error has not recurred.
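If you are looping over many compound pages with requests rather than Selenium, an alternative to implicit waits is to space the requests yourself so you never exceed the site's 1-request-per-3-seconds limit. A minimal sketch (the `Throttle` helper and the example registry numbers are illustrative, not part of any library):

```python
import time

class Throttle:
    """Enforce a minimum interval between successive calls."""

    def __init__(self, interval: float):
        self.interval = interval
        self._last = None  # time of the previous call, if any

    def wait(self):
        """Sleep just long enough to keep calls `interval` seconds apart."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self.interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

# Illustrative usage with ChemIDplus registry numbers:
# throttle = Throttle(3.0)  # site allows max 1 request every 3 seconds
# for rn in ["50-00-0", "64-17-5"]:
#     throttle.wait()
#     html = requests.get(f"https://chem.nlm.nih.gov/chemidplus/rn/{rn}").content
```

The first call returns immediately; each subsequent call sleeps only as long as needed, so pages that take a while to download don't add unnecessary delay on top of the interval.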

Upvotes: 1
