user1419243

Reputation: 1705

Iterate through all links of a website using Selenium

I'm new to Selenium and I would like to download all the pdf, ppt(x) and doc(x) files from a website. I have written the following code, but I'm not sure how to follow the inner links:

import java.io.*;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WebScraper {

    String loginPage = "https://blablah/login";
    static String userName = "11";
    static String password = "11";
    static String mainPage = "https://blahblah";

    public WebDriver driver = new FirefoxDriver();
    ArrayList<String> visitedLinks = new ArrayList<>();

    public static void main(String[] args) throws IOException {

        System.setProperty("webdriver.gecko.driver", "E:\\geckodriver.exe");

        WebScraper webScraper = new WebScraper();
        webScraper.openTestSite();
        webScraper.login(userName, password);

        webScraper.getText(mainPage);
        webScraper.saveScreenshot();
        webScraper.closeBrowser();
    }

    /**
     * Open the test website.
     */
    public void openTestSite() {

        driver.navigate().to(loginPage);
    }

    /**
     * Logs into the website by entering the provided username and password.
     *
     * @param username
     * @param password
     */
    public void login(String username, String password) {

        WebElement userName_editbox = driver.findElement(By.id("IDToken1"));
        WebElement password_editbox = driver.findElement(By.id("IDToken2"));
        WebElement submit_button = driver.findElement(By.name("Login.Submit"));

        userName_editbox.sendKeys(username);
        password_editbox.sendKeys(password);
        submit_button.click();

    }

    /**
     * Navigates to the given page and iterates over all links on it.
     *
     * @param website the page whose links should be visited
     * @throws IOException
     */
    public void getText(String website) throws IOException {

        driver.navigate().to(website);

        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        List<WebElement> allLinks = driver.findElements(By.tagName("a"));

        System.out.println("Total no of links Available: " + allLinks.size());

        for (int i = 0; i < allLinks.size(); i++) {

            String fileAddress = allLinks.get(i).getAttribute("href");
            System.out.println(fileAddress);

            if (fileAddress != null && fileAddress.contains("download")) {
                driver.get(fileAddress);
            } else {
//                getText(allLinks.get(i).getAttribute("href"));
            }
        }
        
    }

    /**
     * Saves the screenshot
     *
     * @throws IOException
     */
    public void saveScreenshot() throws IOException {
        File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(scrFile, new File("screenshot.png"));
    }

    public void closeBrowser() {
        driver.close();
    }
    
}

I have an if clause which checks whether the current link points to a downloadable file (its address contains the word "download"). If it does, I download it; if not, what should I do? That part is my problem. I tried to implement a recursive function that retrieves the nested links and repeats the steps for them (see the sketch below), but with no success.
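Something like this is what I have in mind for the recursive part (a minimal sketch only; the crawl method name and the startsWith(mainPage) check for staying on the same site are just my guesses at how it might look, and it does nothing about links like https://blahblah/# yet):

    /**
     * Visits a page, downloads anything whose href contains "download",
     * and recurses into the other links that stay on the same site.
     * visitedLinks keeps the recursion from looping forever.
     */
    public void crawl(String pageUrl) {
        if (visitedLinks.contains(pageUrl)) {
            return;
        }
        visitedLinks.add(pageUrl);
        driver.navigate().to(pageUrl);

        // collect the hrefs first, because navigating away makes the
        // WebElements from the old page stale
        List<String> hrefs = new ArrayList<>();
        for (WebElement link : driver.findElements(By.tagName("a"))) {
            String href = link.getAttribute("href");
            if (href != null) {
                hrefs.add(href);
            }
        }

        for (String href : hrefs) {
            if (href.contains("download")) {
                driver.get(href);   // downloadable file: fetch it
            } else if (href.startsWith(mainPage)) {
                crawl(href);        // inner link: repeat the steps
            }
        }
    }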

In the meantime, the first link found when giving https://blahblah as the input is https://blahblah/#, which refers to the same page as https://blahblah. That can also cause a problem, but right now I'm stuck on the other issue, namely the implementation of the recursive function. Could you please help me?

Upvotes: 1

Views: 2755

Answers (2)

user1207289

Reputation: 3253

One option is to embed Groovy in your Java code if you want to search depth-first. When HTTPBuilder parses the page, it gives you an XML-like document tree, and you can then traverse as deep as you like using GPath in Groovy. Your test.groovy would look like this:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7' )

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.Method.GET
import static groovyx.net.http.ContentType.JSON
import groovy.json.*
import org.cyberneko.html.parsers.SAXParser
import groovy.util.XmlSlurper
import groovy.json.JsonSlurper

urlValue="http://yoururl.com"

def http = new HTTPBuilder(urlValue) 

// parses the page and provides an XML-like tree; it even handles malformed HTML
def parsedText = http.get([:])

// number of <a> tags; "**" traverses the tree depth-first
aCount = parsedText."**".findAll { it.name() == 'a' }.size()

Then you call test.groovy from Java like this:

    static void runWithGroovyShell() throws Exception {
        // parses test.groovy and runs its top-level statements
        new GroovyShell().parse(new File("test.groovy")).run();
    }

More info on parsing HTML with Groovy

Addition: when you evaluate Groovy within Java, you can access the Groovy script's variables on the Java side through Groovy bindings; have a look here.
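For example, something along these lines should work (a sketch only, assuming the test.groovy above; aCount ends up in the binding because it is assigned without def):

    import groovy.lang.Binding;
    import groovy.lang.GroovyShell;
    import java.io.File;

    public class GroovyBindingExample {
        public static void main(String[] args) throws Exception {
            Binding binding = new Binding();
            GroovyShell shell = new GroovyShell(binding);

            // evaluate the script; variables assigned without 'def' (like aCount) land in the binding
            shell.evaluate(new File("test.groovy"));

            Object aCount = binding.getVariable("aCount");
            System.out.println("Number of <a> tags found: " + aCount);
        }
    }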

Upvotes: 0

aolisa

Reputation: 41

You are not far off. To answer your question: grab all the links into a list of elements, then iterate, click, and wait. In C# it would be something like this:

    IList<IWebElement> listOfLinks = _driver.FindElements(By.XPath("//a"));
    foreach (var link in listOfLinks)
    {
        if (link.GetAttribute("href").Contains("download"))
        {
            link.Click();
            WaitForSecs(); // Thread.Sleep(1000)
        }
    }

JAVA

    List<WebElement> listOfLinks = webDriver.findElements(By.xpath("//a"));
    for (WebElement link : listOfLinks) {
        String href = link.getAttribute("href");
        if (href != null && href.contains("download")) {
            link.click();
            // WaitForSecs(); // Thread.Sleep(1000)
        }
    }
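One thing to watch out for: when a click navigates the browser away, the remaining WebElements in the list can become stale. A common workaround is to collect the href values first and only then visit them; a minimal sketch of that variant (reusing the webDriver instance from above):

    // collect the download URLs first so navigation cannot invalidate the elements
    List<String> downloadUrls = new ArrayList<>();
    for (WebElement link : webDriver.findElements(By.xpath("//a"))) {
        String href = link.getAttribute("href");
        if (href != null && href.contains("download")) {
            downloadUrls.add(href);
        }
    }

    for (String url : downloadUrls) {
        webDriver.get(url); // fetches each collected link in turn
    }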

Upvotes: 1
