Invice

Reputation: 13

Nutch fetching timeout

I am trying to crawl a few sites with nutch-1.12, but fetching does not work properly for some of the URLs on the seed list:

http://www.nature.com/ (1)
https://www.theguardian.com/international (2)
http://www.geomar.de (3)

As you can see in the log below, (2) and (3) are fetched fine, while fetching (1) results in a timeout, even though the URL works fine in a browser. Since I don't want to increase the timeout and retry settings drastically, I am wondering whether there is another way to determine why this timeout occurs and how to fix it.


Log

Injector: starting at 2017-02-27 18:33:38
Injector: crawlDb: nature_crawl/crawldb
Injector: urlDir: urls-2
Injector: Converting injected urls to crawl db entries.
Injector: overwrite: false
Injector: update: false
Injector: Total urls rejected by filters: 0
Injector: Total urls injected after normalization and filtering: 3
Injector: Total urls injected but already in CrawlDb: 0
Injector: Total new urls injected: 3
Injector: finished at 2017-02-27 18:33:42, elapsed: 00:00:03
Generator: starting at 2017-02-27 18:33:45
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: running in local mode, generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: nature_crawl/segments/20170227183349
Generator: finished at 2017-02-27 18:33:51, elapsed: 00:00:05
Fetcher: starting at 2017-02-27 18:33:53
Fetcher: segment: nature_crawl/segments/20170227183349
Fetcher: threads: 3
Fetcher: time-out divisor: 2
QueueFeeder finished: total 3 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
fetching https://www.theguardian.com/international (queue crawl delay=1000ms)
Using queue mode : byHost
fetching http://www.nature.com/ (queue crawl delay=1000ms)
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://www.geomar.de/ (queue crawl delay=1000ms)
robots.txt whitelist not configured.
robots.txt whitelist not configured.
robots.txt whitelist not configured.
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=2
-activeThreads=2, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=2
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
.
.
.
-activeThreads=1, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=1
fetch of http://www.nature.com/ failed with: java.net.SocketTimeoutException: Read timed out
Thread FetcherThread has no more work available
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0, fetchQueues.getQueueCount=0
-activeThreads=0
Fetcher: finished at 2017-02-27 18:34:18, elapsed: 00:00:24
ParseSegment: starting at 2017-02-27 18:34:21
ParseSegment: segment: nature_crawl/segments/20170227183349
Parsed (507ms):http://www.geomar.de/
Parsed (344ms):https://www.theguardian.com/international
ParseSegment: finished at 2017-02-27 18:34:24, elapsed: 00:00:03
CrawlDb update: starting at 2017-02-27 18:34:26
CrawlDb update: db: nature_crawl/crawldb
CrawlDb update: segments: [nature_crawl/segments/20170227183349]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: false
CrawlDb update: URL filtering: false
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2017-02-27 18:34:30, elapsed: 00:00:03

Upvotes: 1

Views: 1199

Answers (2)

Torukmakto

Reputation: 350

You can try increasing the HTTP timeout setting in your nutch-site.xml:

<property>
  <name>http.timeout</name>
  <value>30000</value>
  <description>The default network timeout, in milliseconds.</description>
</property>
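
For comparison, the default value of http.timeout shipped in nutch-default.xml is 10000 ms (as far as I know), so 30000 already triples the network timeout.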

Otherwise, check whether that site's robots.txt allows crawling of its pages.
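
For example, a quick way to inspect those rules yourself (a minimal sketch, not something Nutch does for you; adjust the URL to the site in question) is to fetch the file directly:

# Print the first rules of the site's robots.txt (the file always sits at the host root)
wget -q -O - http://www.nature.com/robots.txt | head -n 40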

Upvotes: 2

Sebastian Nagel

Reputation: 2239

I don't know why, but it looks like www.nature.com keeps the connection hanging if the user-agent string contains "Nutch". This is also reproducible with wget:

wget -U 'my-test-crawler/Nutch-1.13-SNAPSHOT (mydotmailatexampledotcom)' -d http://www.nature.com/
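
If a non-Nutch agent string does fix it, the agent Nutch sends can be adjusted in nutch-site.xml. A minimal sketch, assuming the User-Agent header is assembled from the http.agent.* properties and that http.agent.version is where the "Nutch" token comes from; the values below are placeholders, not recommendations:

<property>
  <name>http.agent.name</name>
  <value>my-test-crawler</value>
  <description>HTTP 'User-Agent' request header.</description>
</property>

<property>
  <name>http.agent.version</name>
  <value>1.0</value>
  <description>Replaces the default Nutch-1.12 token in the agent string, for testing only.</description>
</property>

Whether hiding the Nutch identifier is acceptable for a given site is a separate question; the snippet above is only meant to confirm that the user-agent string is what triggers the hang.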

Upvotes: 0
