Reputation: 629
I'm using Capybara to navigate through a login on a website and then download a few files (I'm automating a frequent process that I have to do). There are a few things I've tried that aren't working, and I'm hoping someone might know a solution...
I have two links I'm calling .click on, but while the first file starts downloading (this is using the Chrome Selenium driver), Capybara seems to stop functioning after that. Calling .click on the second link doesn't do anything... I figured it's because the browser isn't technically on the page anymore (since it followed a download link), but I tried revisiting the page to click the second link and that doesn't work either.
Assuming I can get that working, I'd really like to download to my script's location rather than my Downloads folder, but I've tried every profile configuration I've found online and nothing seems to change it (one example of what I've tried is below).
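For reference, the sort of configuration I've been trying looks roughly like this (the driver name is arbitrary, and I'm pointing the download directory at the script's own folder; the options: keyword may vary with Capybara/selenium-webdriver versions):

    require 'capybara'
    require 'selenium-webdriver'

    Capybara.register_driver :chrome_downloads do |app|
      options = Selenium::WebDriver::Chrome::Options.new
      # download.default_directory / download.prompt_for_download are
      # standard Chrome preference names
      options.add_preference(:download,
                             default_directory: File.expand_path(__dir__),
                             prompt_for_download: false)
      Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
    end

    Capybara.default_driver = :chrome_downloads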
Because of the first two issues, I decided to try wget... but I would need to carry the authenticated session over from Capybara. Is it possible to pull the session data (just the cookies?) out of Capybara and insert it into a wget or curl command?
Thanks!
Upvotes: 0
Views: 737
Reputation: 49960
For #3 - accessing the cookies is driver dependent - in Selenium it's

    page.driver.browser.manage.all_cookies

or you can use the https://github.com/nruth/show_me_the_cookies gem, which normalizes access across most of Capybara's drivers. With those cookies you can write them out to a file and then use the --load-cookies option of wget (the --cookie option in curl).
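Roughly, that cookie-file step could look like the following sketch (assuming the Selenium driver; cookies.txt and the download URL are placeholders, and wget has to be on your PATH):

    require 'date' # for #to_time on the cookie expiry

    cookies = page.driver.browser.manage.all_cookies

    File.open('cookies.txt', 'w') do |f|
      f.puts '# Netscape HTTP Cookie File'
      cookies.each do |c|
        # Netscape format: domain, subdomain flag, path, secure, expiry, name, value
        expires = c[:expires]
        expiry  = expires.respond_to?(:to_time) ? expires.to_time.to_i : 0
        f.puts [
          c[:domain],
          c[:domain].to_s.start_with?('.') ? 'TRUE' : 'FALSE',
          c[:path],
          c[:secure] ? 'TRUE' : 'FALSE',
          expiry,
          c[:name],
          c[:value]
        ].join("\t")
      end
    end

    # placeholder URL -- substitute the real download link
    system('wget', '--load-cookies', 'cookies.txt', 'https://example.com/path/to/file')

Passing the arguments to system separately avoids shell-quoting problems with the URL.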
For #1 you'd need to provide more info: any errors you get, what current_url is, what "doesn't work" actually means, etc.
Upvotes: 0