Reputation: 129
Hello everyone. I am building a web application that crawls a large number of pages from a specific website. I started my crawler4j job with unlimited depth and unlimited pages, but it suddenly stopped because of an internet connection failure. Now I want to resume crawling that website without fetching the URLs I visited before, given that I have the depth of the last pages crawled.

Note: I am looking for a solution that does not require comparing the URLs I am about to fetch against my stored URLs, because I don't want to send too many requests to this site.
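
For context, my crawler is set up roughly like the sketch below (the storage folder, seed URL, and class names are just placeholders, not my real ones). I believe crawler4j's `setResumableCrawling(true)` option is supposed to keep the crawl frontier on disk so a stopped crawl can continue without re-requesting pages it already fetched, but I am not sure whether it covers my situation:

```java
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class ResumableCrawlExample {

    public static class MyCrawler extends WebCrawler {
        @Override
        public boolean shouldVisit(Page referringPage, WebURL url) {
            // Restrict the crawl to the target site (placeholder domain).
            return url.getURL().startsWith("http://www.example.com/");
        }

        @Override
        public void visit(Page page) {
            // Process the fetched page here.
            System.out.println("Visited: " + page.getWebURL().getURL());
        }
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/data/crawl/root"); // placeholder path
        config.setMaxDepthOfCrawling(-1); // -1 = unlimited depth
        config.setMaxPagesToFetch(-1);    // -1 = unlimited pages
        // My assumption: this keeps the frontier on disk so a restart continues
        // from where the crawl stopped instead of re-fetching old URLs.
        config.setResumableCrawling(true);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        controller.addSeed("http://www.example.com/"); // placeholder seed
        controller.start(MyCrawler.class, 1);
    }
}
```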
**Thanks** ☺
Upvotes: 1
Views: 203