Александр

Reputation: 1

Web crawling with breadth but not depth

I'm making my first web crawler using Java and jsoup. I found a piece of code that works, but not the way I want: it focuses on the depth of links, whereas I want to crawl pages breadth-first. I've spent some time trying to rework the code to focus on breadth, but it still goes too deep starting from the first link. Any ideas on how I can do breadth-first crawling?

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

import java.io.IOException;
import java.util.HashSet;

public class WebCrawlerWithDepth {
    private static final int MAX_DEPTH = 4;
    private HashSet<String> links;

    public WebCrawlerWithDepth() {
        links = new HashSet<>();
    }

    public void getPageLinks(String URL, int depth) {
        if (!links.contains(URL) && depth < MAX_DEPTH) {
            System.out.println("Depth: " + depth + " " + URL);
            links.add(URL);

            try {
                Document document = Jsoup.connect(URL).get();
                Elements linksOnPage = document.select("a[href]");

                // Recursing here follows each link all the way down
                // before its siblings are visited (depth-first).
                depth++;
                for (Element page : linksOnPage) {
                    getPageLinks(page.attr("abs:href"), depth);
                }
            } catch (IOException e) {
                System.err.println("For '" + URL + "': " + e.getMessage());
            }
        }
    }
}

Upvotes: 0

Views: 196

Answers (2)

Sean Patrick Floyd

Reputation: 298908

Basically the same way you go from depth-first to breadth-first in algorithmic coding: you need a queue.

Add every link you've extracted to the queue, and retrieve new pages to be crawled from that queue.

Here's my take on your code:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

import java.io.IOException;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Set;

public class WebCrawlerWithDepth {

    private static final int MAX_DEPTH = 4;
    private Set<String> visitedLinks;
    private Queue<Link> remainingLinks;

    public WebCrawlerWithDepth() {
        visitedLinks = new HashSet<>();
        remainingLinks = new LinkedList<>();
    }

    public void getPageLinks(String url, int depth) throws IOException {
        remainingLinks.add(new Link(url, 0));
        int maxDepth = Math.max(1, Math.min(depth, MAX_DEPTH));
        processLinks(maxDepth);
    }

    private void processLinks(final int maxDepth) throws IOException {
        // Pages are taken from the head of the queue, so every page at
        // depth n is processed before any page at depth n + 1.
        while (!remainingLinks.isEmpty()) {
            Link link = remainingLinks.poll();
            int depth = link.level;
            if (depth < maxDepth) {
                Document document = Jsoup.connect(link.url).get();
                Elements linksOnPage = document.select("a[href]");
                for (Element page : linksOnPage) {
                    // "abs:href" resolves relative links to absolute URLs
                    String href = page.attr("abs:href");
                    if (visitedLinks.add(href)) {
                        remainingLinks.offer(new Link(href, depth + 1));
                    }
                }
            }
        }
    }

    static class Link {

        final String url;
        final int level;

        Link(final String url, final int level) {
            this.url = url;
            this.level = level;
        }
    }
}

Upvotes: 1

Code-Apprentice

Reputation: 83537

Instead of recursing directly on the links in the current page, you need to store them in a Queue. This queue should hold all the links still to be visited, from all pages. Then you take the next link to visit from the front of the queue.
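A minimal sketch of that idea, assuming jsoup and a hypothetical crawl method that uses a simple page cap instead of a depth limit, might look like this:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.IOException;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Set;

class BreadthFirstSketch {
    static void crawl(String startUrl, int maxPages) throws IOException {
        Queue<String> toVisit = new LinkedList<>();
        Set<String> seen = new HashSet<>();
        toVisit.add(startUrl);
        seen.add(startUrl);

        while (!toVisit.isEmpty() && seen.size() < maxPages) {
            String url = toVisit.poll();              // oldest link first => breadth-first order
            Document doc = Jsoup.connect(url).get();
            System.out.println(url);                  // stand-in for whatever you do with a page
            for (Element a : doc.select("a[href]")) {
                String next = a.attr("abs:href");     // resolve relative links to absolute URLs
                if (seen.add(next)) {                 // enqueue each link only once
                    toVisit.add(next);
                }
            }
        }
    }
}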

Upvotes: 0
