Reputation: 21
I am trying to get all (5) tables from this URL.
I can populate the drop box for the individual pages with type(value), but that does not refresh the page. Stepping through the pages with the nextPage button fails after one page because the object is no longer attached to the DOM (and I don't know how to get around that in splinter).
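Presumably the way around that is to look the button up again after every click instead of reusing the old reference. A rough sketch of that idea (the btnMRBPageNext class comes from my code below, url is the same URL, and the page count of 5 is only illustrative):
from splinter import Browser
from time import sleep

with Browser() as browser:
    browser.visit(url)                                          # same url as in the full script below
    for page in range(5):                                       # illustrative page count
        # ... scrape the current page here ...
        buttons = browser.find_by_css('input.btnMRBPageNext')   # fresh lookup every pass
        if buttons.is_empty():
            break
        buttons.first.click()
        sleep(2)                                                 # crude wait for the postback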
So instead I am trying to populate the drop down and then select it, which returns this error:
Traceback (most recent call last):
  File "<stdin>", line 69, in <module>
  File "/usr/local/lib/python2.6/dist-packages/splinter/driver/webdriver/__init__.py", line 334, in select
    self.find_by_xpath('//select[@name="%s"]/option[@value="%s"]' % (self["name"], value))._element.click()
  File "/usr/local/lib/python2.6/dist-packages/splinter/element_list.py", line 73, in __getattr__
    self.__class__.__name__, name))
AttributeError: 'ElementList' object has no attribute '_element'
I used the code below. Any help most appreciated!
from splinter import Browser
from lxml.html import parse
from StringIO import StringIO
from time import sleep

url = r'http://www.molpower.com//VLCWeb/UIAboutMOL/PortScheduleInfo.aspx?pPort=NLRTMDE&pFromDate=01-Oct-2013&pToDate=10-Oct-2013'

def _unpack(row, kind='td'):
    elts = row.findall('.//%s' % kind)
    return [val.text_content() for val in elts[0:7]]

def parse_schdls_data(table):
    rows = table.findall('.//tr')
    hdrs = _unpack(rows[0], kind='th')
    data = [_unpack(r, kind='td') for ir, r in enumerate(rows[1:-1]) if ir % 3 == 0]
    return (hdrs, data)

with Browser() as browser:
    browser.visit(url)
    print browser.url
    pages = browser.find_by_tag('option')
    pagevals = [p.value for p in pages]
    maxpagev = max(pagevals)
    inputs = browser.find_by_tag('input')
    '''
    for ip, inp in enumerate(inputs):
        if inp.has_class('btnMRBPageNext'):
            #print ip, inp.value, inp.text
            #Need input 35 for the nextPage
            inp.click()
    '''
    selects = browser.find_by_tag('select')
    for ns, sel in enumerate(selects):
        if sel.has_class('inputDropDown'):
            print ns, sel.value, sel.text
            sel.type(sel.value)
            sleep(2)
    moldata = list()
    for page in range(len(pagevals)):
        content = browser.html
        parsed = parse(StringIO(content))
        doc = parsed.getroot()
        tables = doc.findall('.//table')
        schdls = tables[91]
        #Get all rows from that table
        rows = schdls.findall('.//tr')
        hdr, data = parse_schdls_data(schdls)
        #print page, data
        moldata.append(data)
        while browser.is_element_not_present_by_tag('select', wait_time=2):
            pass
        inputs = browser.find_by_tag('input')
        selects = browser.find_by_tag('select')
        #inputs[35].click()
        #selects[0].type(str(page + 1))
        selects[0].select(selects[0].value)
Upvotes: 2
Views: 2206
Reputation: 1079
You got a reference to the select elements by calling
selects = browser.find_by_tag('select')
and then called the select method on one of those elements:
selects[0].select(selects[0].value)
But the traceback shows that splinter turns this into a find_by_xpath lookup:
self.find_by_xpath('//select[@name="%s"]/option[@value="%s"]' % (self["name"], value))._element.click()
That XPath assumes the select element has a name attribute. If the select has no name, the lookup matches nothing and you get exactly this error.
<html>
  <head>
  </head>
  <body>
    <select id='s1'>
      <option value='a'> A </option>
      <option value='b'> B </option>
      <option value='c'> C </option>
    </select>
    <select id='s2'>
      <option value='a'> A </option>
      <option value='b'> B </option>
      <option value='c'> C </option>
    </select>
  </body>
</html>
We can reproduce this error with the HTML page above; after adding name attributes to the two select elements, the error goes away.
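For example, a minimal reproduction along these lines (assuming the HTML above is saved locally as /tmp/test.html, a hypothetical path) would be:
# Sketch only: open the nameless-select page above and trigger the same error.
from splinter import Browser

with Browser() as browser:
    browser.visit('file:///tmp/test.html')
    sel = browser.find_by_tag('select').first
    try:
        sel.select('b')   # splinter builds //select[@name=...]/option[...], which matches nothing
    except AttributeError as e:
        print e           # 'ElementList' object has no attribute '_element'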
Upvotes: 0
Reputation: 13301
The select element does not have a name attribute, so select the option through the select's id instead, for example:
browser.find_by_xpath('//select[@id="MRBgvPortScheduleInformation_edlPager"]/option[@value="2"]')._element.click()
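To walk through all the pages this way, a rough sketch (it re-finds the option after every postback, so no stale reference is reused, and it assumes the option values are the page numbers) could be:
# Sketch only: click each page option by value, re-locating it after every postback.
pager = '//select[@id="MRBgvPortScheduleInformation_edlPager"]/option'
npages = len(browser.find_by_xpath(pager))
for page in range(2, npages + 1):
    # ... scrape the tables of the current page here ...
    browser.find_by_xpath(pager + '[@value="%d"]' % page).first.click()
    while browser.is_element_not_present_by_tag('select', wait_time=2):
        pass   # wait until the postback has rendered the new page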
Upvotes: 1