Twipped

Reputation: 1179

What is the best method for testing URLs against a blacklist in PHP

I have a script that is scraping URLs from various sources, resulting in a rather large list. Currently I've just got a collection of if statements that I'm using to filter out sites I don't want. This obviously isn't maintainable, so I'm trying to find a fast and powerful solution for filtering against a blacklist of URL masks.

The best thing I could come up with is looping through an array of regex patterns and filtering out anything that matches (roughly like the sketch below). Is this really my best bet, or is there another method that would do the job better?
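
For illustration, the loop I mean looks roughly like this (the patterns and the isBlacklisted() helper are just examples):

    $blacklistPatterns = array(
        '#^https?://(www\.)?google\.com#i',
        '#^https?://(www\.)?yahoo\.com#i',
        '#/ads?/#',
    );

    function isBlacklisted($url, array $patterns)
    {
        foreach ($patterns as $pattern) {
            if (preg_match($pattern, $url)) {
                return true;    // matched one of the masks, so filter it out
            }
        }
        return false;
    }

Filtering the scraped list is then just a matter of calling isBlacklisted() on every URL.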

Upvotes: 0

Views: 783

Answers (4)

Byron Whitlock

Reputation: 53871

You should keep the sites in a hash and look them up that way. It is simple and elegant:

    $excluded['www.google.com'] = true;
    $excluded['www.mapquest.com'] = true;
    $excluded['www.yahoo.com'] = true;

    $url = "http://www.google.com?q=barefoot+winery";

    $urlArray = parse_url($url);

    if (! isset($excluded[$urlArray['host']]))
    {
        scrape($url);
    }

As Pascal said, after a while you will run into memory problems. But at that point, maintaining the URLs will be a bigger issue. Go for a database when that happens.

Upvotes: 1

Pascal MARTIN

Reputation: 401022

If you want to exclude domain names, or URLs that have no "variable part", a solution might be to use a database, with a table containing only the URLs, with the right index, and do a quick match against it.

Finding out whether a URL must not be dealt with would then only be a matter of doing a quick query against that DB (which generally means "URL equals" or "URL starts with") -- and it can be as simple as an SQLite DB, which fits in a file and doesn't require an additional server.
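
A minimal sketch of that lookup, assuming an SQLite file with a blacklist table holding one host per row (the file path and all table/column names are made up):

    $db = new PDO('sqlite:/path/to/blacklist.sqlite');
    // host is the primary key, so the "URL equals" lookup uses the index
    $db->exec('CREATE TABLE IF NOT EXISTS blacklist (host TEXT PRIMARY KEY)');

    $url  = 'http://www.google.com/?q=barefoot+winery';
    $host = parse_url($url, PHP_URL_HOST);

    $stmt = $db->prepare('SELECT 1 FROM blacklist WHERE host = :host LIMIT 1');
    $stmt->execute(array(':host' => $host));

    $blocked = ($stmt->fetchColumn() !== false);   // any row back means "blacklisted"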


The idea of a PHP array has one drawback: as your array gets bigger, it will take more and more memory just to hold it -- and, one day or another, you will use too much memory and hit memory_limit; if you have more than a couple of thousand URLs, that solution might not be the best one.

Still, if you only have a couple of URLs or patterns, the idea of a PHP array, looping over it and comparing each value with strpos (for "contains" or "starts with") or preg_match (for regex), will do just fine -- and is the easiest one to implement.
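
For the in-memory variant, a quick sketch with strpos; the entries are only examples, and you would check for === 0 instead of !== false if you want "starts with" rather than "contains":

    $blacklist = array('doubleclick', '/banner/', 'www.example-spam.com');
    $url       = 'http://www.example-spam.com/landing?id=42';

    $blocked = false;
    foreach ($blacklist as $needle) {
        if (strpos($url, $needle) !== false) {   // "contains" check
            $blocked = true;
            break;
        }
    }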


If you want to use some complex matching rule, using some kind of regex will probably be your only real way... Be it on the PHP side, with preg_match, or on a SQL server (MySQL, for instance, has support for regex, as far as I know -- no idea about the performance, though; see 11.4.2. Regular Expressions for more information).
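
On the MySQL side, the query could look roughly like this (the url_blacklist table, its pattern column, and the connection details are all assumptions); REGEXP returns 1 when the URL matches a stored pattern:

    // connection details are placeholders
    $pdo = new PDO('mysql:host=localhost;dbname=scraper', 'user', 'password');

    $stmt = $pdo->prepare(
        'SELECT 1 FROM url_blacklist WHERE :url REGEXP pattern LIMIT 1'
    );
    $stmt->execute(array(':url' => 'http://www.google.com/?q=barefoot+winery'));

    $blocked = ($stmt->fetchColumn() !== false);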

Upvotes: 3

Cem Kalyoncu

Reputation: 14603

Will you be loading a long list of items into memory each time? I think egrep or grep will be the best method. On Linux your file will remain in the file cache, so results will be very fast, and since egrep runs through the file, not every Apache thread will have a copy of the list in memory.
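
A rough sketch of that from PHP, assuming the blacklist file holds one pattern per line (the file path is made up):

    $url = 'http://www.google.com/?q=barefoot+winery';

    // -E: extended regex, -f: read patterns from the blacklist file,
    // -q: quiet; exit status 0 means at least one pattern matched the URL
    $cmd = 'printf %s ' . escapeshellarg($url)
         . ' | grep -E -q -f /path/to/blacklist_patterns.txt';
    exec($cmd, $output, $exitCode);

    $blocked = ($exitCode === 0);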

Upvotes: 0

Jani Hartikainen

Reputation: 43243

If you need to be able to specify patterns, then looping through an array of regexes is probably fine.

If you only need exact matches and no patterns, you can use strpos or such to do a straight string match, which should be somewhat faster.
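
A minimal sketch of the exact-match case, comparing hosts rather than full URLs (in_array with a plain list is used here instead of strpos, just to keep it short):

    $excluded = array('www.google.com', 'www.mapquest.com', 'www.yahoo.com');

    $url  = 'http://www.google.com/?q=barefoot+winery';
    $host = parse_url($url, PHP_URL_HOST);

    $blocked = in_array($host, $excluded, true);   // straight string comparison, no patterns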

Upvotes: 0
