Reputation: 199
internet.find(:xpath, '/html/body/div[1]/div[10]/div[2]/div[2]/div[1]/div[1]/div[1]/div[5]/div/div[2]/a').text
I am looping through a series of pages and sometimes this XPath will not be present. How do I continue to the next URL instead of raising an error and stopping the program? Thanks.
Upvotes: 2
Views: 313
Reputation: 11
As an alternative to the accepted answer, you could consider the #first method, which accepts a count argument for the number of expected matches, or nil to allow empty results as well:
internet.first(:xpath, ..., count: nil)&.text
That returns the element's text if one is found and nil otherwise, and there's no need to rescue and ignore an exception.
See Capybara docs
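For example, inside a loop over pages this might look like the sketch below (urls, internet, and the XPath are stand-ins for whatever your script already uses):

# Minimal sketch: skip pages where nothing matches (names are assumptions).
urls.each do |url|
  internet.visit(url)
  text = internet.first(:xpath, '//div[@class="listing"]//a', count: nil)&.text
  next if text.nil? # no match on this page, move on to the next url
  puts text
end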
Upvotes: 1
Reputation: 49890
First, stop using XPaths like that - they're going to be ultra-fragile and horrendous to read. Without seeing the HTML I can't give you a better one, but at least part way along there has to be an element with an id you can target instead.
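For instance, if one of those wrapping divs had an id of results and the link sat inside a listing div (both names are made up here), a scoped CSS selector would be much more robust:

# Hypothetical selectors - adjust to whatever ids/classes your HTML actually has.
internet.find(:css, '#results .listing a').text
# or scope the search first and find within that scope
internet.within('#results') { internet.find('.listing a').text }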
Next, you could rescue the exception raised by find and ignore it, or better yet, check whether the page has the element first:
if internet.has_xpath?(...)
  internet.find(:xpath, ...).text
  ...
else
  ... whatever you want to do when it's not on the page
end
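If you'd rather go the rescue route inside your loop, a sketch could look like this (again, urls, internet, and the XPath are placeholders for your own code):

# Rescue the Capybara::ElementNotFound that find raises and continue the loop.
urls.each do |url|
  internet.visit(url)
  begin
    puts internet.find(:xpath, '//div[@class="listing"]//a').text
  rescue Capybara::ElementNotFound
    next # element isn't on this page - carry on with the next url
  end
end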
Upvotes: 2