snoofkin

Reputation: 8895

Webpage updates detection algorithms

First of all, I'm not looking for code, just a plain discussion of approaches to the problem in the subject line.

I've been wondering lately what the best way really is to detect changes to website pages as fast as possible. Assuming I have 100K websites, each with an unknown number of pages, does a crawler really need to visit each and every one of them once in a while?

Upvotes: 2

Views: 144

Answers (1)

hackartist

Reputation: 5264

Unless they have RSS feeds (which you would still need to poll to see if they have changed), there really isn't any way to find out when a site has changed except by going to it and checking. However, you can do some smart things to be more efficient. After you have been checking a site for a while, you can build a prediction model of when it tends to update. For example: this news site updates every 2-3 hours, but that blog only posts about once a week. This can save you many checks, because the majority of pages don't actually update that often. Google does this to help schedule its crawling. One simple algorithm that will work for this (depending on how cutting-edge you need your news to be) is the following, of my own design, loosely based on binary search:

Start each site off with a time interval of ~1 day
Visit the site when that interval elapses and check for changes
if something has changed
    halve the interval for that site
else
    double the interval for that site
if after many iterations the interval hovers around 2-3 values
    fix the interval at the greatest of those values
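A rough Python sketch of that interval-adjustment loop (the bounds, the hover-detection window, and all names here are my own illustrative choices, not part of the pseudocode above):

import time

MIN_INTERVAL = 15 * 60            # 15-minute floor, an assumed bound
MAX_INTERVAL = 7 * 24 * 60 * 60   # 1-week ceiling, an assumed bound

class SiteSchedule:
    """Adaptive polling interval for one site: halve on change, double on none."""

    def __init__(self, url, interval=24 * 60 * 60):  # start at ~1 day
        self.url = url
        self.interval = interval
        self.next_check = time.time() + interval
        self.recent = []      # last few intervals, to spot hovering
        self.fixed = False    # once True, stop adjusting

    def record(self, changed):
        """Call after each visit with whether the page had changed."""
        if not self.fixed:
            if changed:
                self.interval = max(MIN_INTERVAL, self.interval / 2)
            else:
                self.interval = min(MAX_INTERVAL, self.interval * 2)
            # If the interval keeps bouncing among 2-3 values, pin it
            # at the greatest of them, per the pseudocode above.
            self.recent = (self.recent + [self.interval])[-8:]
            if len(self.recent) == 8 and len(set(self.recent)) <= 3:
                self.interval = max(self.recent)
                self.fixed = True
        self.next_check = time.time() + self.interval

With 100K sites you would keep one of these objects per site in a priority queue keyed on next_check and always service the soonest entry.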

Now, this is a simple algorithm for finding the right times to check, but you can probably do something more effective if you parse the text and look for patterns in when the updates were actually posted.
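As for the "go to it and check" step itself, each visit can be kept cheap with conditional HTTP requests, falling back to hashing the body. A minimal sketch, assuming the requests library (the function name and state layout are illustrative):

import hashlib
import requests

def check_changed(url, state):
    """Return True if the page changed since the last check.

    `state` is a dict persisted between checks, holding the last seen
    ETag/Last-Modified headers and a hash of the body.
    """
    headers = {}
    if state.get("etag"):
        headers["If-None-Match"] = state["etag"]
    if state.get("last_modified"):
        headers["If-Modified-Since"] = state["last_modified"]

    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304:   # server says: not modified
        return False

    state["etag"] = resp.headers.get("ETag")
    state["last_modified"] = resp.headers.get("Last-Modified")

    # Fall back to comparing a digest of the body for servers that
    # don't honor conditional requests.
    digest = hashlib.sha256(resp.content).hexdigest()
    changed = digest != state.get("digest")
    state["digest"] = digest
    return changed

When the server supports ETag or Last-Modified, most checks come back as a tiny 304 response and never download the page body at all.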

Upvotes: 1
