Reputation: 404
I have a list of URLs and I need to scrape data from them. The website refuses the connection when I open each URL in a new driver instance, so I decided to open each URL in a new tab (the website allows this). Below is the code I am using:
from selenium import webdriver
import time
from lxml import html

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
for aa in file:
    aa = aa.strip()
    driver.execute_script("window.open('{}');".format(aa))
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
But the problem is that all the URLs open at once instead of one after another.
Upvotes: 0
Views: 694
Reputation: 1064
You can try this code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from lxml import html

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
for aa in file:
    aa = aa.strip()
    # Open a new tab
    driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't')
    # You can use (Keys.CONTROL + 't') on other OSs
    # Load a page
    driver.get(aa)
    # Make the tests...
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
driver.quit()
Upvotes: 1
Reputation: 470
I think you have to strip each line before the loop, like this:
from selenium import webdriver
import time
from lxml import html

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
file = open('f:\\listofurls.txt', 'r')
# Strip each line before the loop (a file object itself has no .strip())
aa = [line.strip() for line in file]
for i in aa:
    driver.execute_script("window.open('{}');".format(i))
    soup = html.fromstring(driver.page_source)
    name = soup.xpath('//div[@class="name"]//text()')
    title = soup.xpath('//div[@class="title"]//text()')
    print(name, title)
    time.sleep(3)
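The strip-before-the-loop idea can be sketched on its own, without a browser. The inline string below is hypothetical data standing in for the contents of listofurls.txt:

```python
# Hypothetical file contents standing in for f:\listofurls.txt
raw = "https://example.com/a\nhttps://example.com/b\n"

# Strip each line individually; calling .strip() on a file object fails
urls = [line.strip() for line in raw.splitlines()]
print(urls)  # → ['https://example.com/a', 'https://example.com/b']
```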
Upvotes: 0