Reputation: 1191
I originally asked this question: Regular Expression in gVim to Remove Duplicate Domains from a List
However, I realize I may be more likely to find a working solution if I "broaden my scope" in terms of what solution I'm willing to accept.
So, I'll rephrase my question & maybe I'll get a better solution...here goes:
I have a large list of URLs in a .txt file (I'm running Windows Vista 32-bit), and I need to remove duplicate DOMAINS (and the entire corresponding URL for each duplicate) while leaving behind the first occurrence of each domain. There are roughly 6,000,000 URLs in this particular file, one per line, in the following format (the URLs obviously don't have a space in them; I just had to do that in my post because I don't have enough posts here to post that many "live" URLs):
http://www.exampleurl.com/something.php
http://exampleurl.com/somethingelse.htm
http://exampleurl2.com/another-url
http://www.exampleurl2.com/a-url.htm
http://exampleurl2.com/yet-another-url.html
http://exampleurl.com/
http://www.exampleurl3.com/here_is_a_url
http://www.exampleurl5.com/something
Whatever the solution is, the output file, using the above as the input, should be this:
http://www.exampleurl.com/something.php
http://exampleurl2.com/another-url
http://www.exampleurl3.com/here_is_a_url
http://www.exampleurl5.com/something
You'll notice there are no duplicate domains now, and that the first occurrence of each domain is the one left behind.
If anybody can help me out, whether it be using regular expressions or some program I'm not aware of, that would be great.
I'll say this, though: I have NO experience using anything other than a Windows OS, so a solution entailing something other than a Windows program would take a little "baby stepping," so to speak (if anybody is kind enough to do so).
Upvotes: 3
Views: 4761
Reputation: 959
Here is an efficient Python solution to filter duplicate domains from large text files.
from urllib.parse import urlparse

with open('domains.txt', encoding='utf-8', errors='ignore') as urls_file:
    all_urls = urls_file.read().splitlines()
print('all urls count = ', len(all_urls))

unique_urls = set()
unique_root_domains = set()
for url in all_urls:
    # hostname keeps any "www." prefix, so www.example.com and example.com
    # are treated as different domains here
    root_domain = urlparse(url).hostname
    if root_domain not in unique_root_domains:
        unique_root_domains.add(root_domain)
        unique_urls.add(url)  # only the first URL seen for each domain is kept

with open('unique-urls.txt', 'w') as unique_urls_file:
    unique_urls_file.write('\n'.join(unique_urls) + '\n')
print('all unique urls count = ', len(unique_urls))
Upvotes: 0
Reputation: 9162
Regular expressions in Python; it's very raw and does not handle subdomains. The basic concept is to use dictionary keys and values: the key is the domain name, and the value is overwritten if the key already exists.
import re

pattern = re.compile(r'(http://?)(w*)(\.*)(\w*)(\.)(\w*)')
urlsFile = open("urlsin.txt", "r")
outFile = open("outurls.txt", "w")
urlsDict = {}

for linein in urlsFile.readlines():
    match = pattern.search(linein)
    if match is None:
        continue  # skip lines the pattern cannot parse
    url = match.groups()
    domain = url[3]  # the bare domain name, without any "www." prefix or the TLD
    urlsDict[domain] = linein  # a later URL overwrites an earlier one for the same domain

outFile.write("".join(urlsDict.values()))
urlsFile.close()
outFile.close()
You can extend it to filter out subdomains, but the basic idea is there, I think. And for 6 million URLs, it might take quite a while in Python...
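Here is a minimal sketch of that extension, assuming Python 3.7+ (so the dict keeps insertion order) and reusing the urlsin.txt / outurls.txt file names from above; it uses urlparse from the standard library instead of the regexp and only strips a leading "www.", so real subdomains are still not handled:
from urllib.parse import urlparse

def domain_key(url):
    # normalise the host so "www.exampleurl.com" and "exampleurl.com"
    # land on the same dictionary key
    host = urlparse(url).hostname or ''
    return host[4:] if host.startswith('www.') else host

first_seen = {}
with open('urlsin.txt') as urls_in:
    for line in urls_in:
        url = line.strip()
        if url:
            # setdefault keeps only the first URL seen for each domain
            first_seen.setdefault(domain_key(url), url)

with open('outurls.txt', 'w') as urls_out:
    urls_out.write('\n'.join(first_seen.values()) + '\n')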
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. --Jamie Zawinski, in comp.emacs.xemacs
Upvotes: 2
Reputation: 3631
I would use a combination of Perl and regexps. My first version is:
use warnings ;
use strict ;

my %seen ;
while (<>) {
    if ( m{ // ( .*? ) / }x ) {
        my $dom = $1 ;
        print unless $seen{$dom}++ ;   # print the line only the first time this domain is seen
        # print "$dom\n" ;             # debugging: show just the domain
    } else {
        print "Unrecognised line: $_" ;
    }
}
But this treats www.exampleurl.com and exampleurl.com as different. My 2nd version has
if ( m{ // (?:www\.)? ( .*? ) / }x )
which ignores "www." at front. You could probably refine the regexp a bit, but that is left to the reader.
Finally, you could comment the regexp a bit (the /x modifier allows this). It rather depends on who is going to be reading it - it could be regarded as too verbose.
if ( m{
        //           # match double slash
        (?:www\.)?   # ignore www
        (            # start capture
          .*?        # anything, but not greedy
        )            # end capture
        /            # match /
      }x ) {
I use m{} rather than // as the regexp delimiter, to avoid /\/\/.
Upvotes: 1
Reputation: 112386
And ça va, you have the dups together. Perhaps then use uniq(1) to find the duplicates.
(Extra credit: why can't a regular expression alone do this? Computer science students should think about the pumping lemmas.)
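A minimal sketch of that idea in Python, assuming the input lives in urls.txt and that sorting by domain is what brings the duplicates together; Python's sort is stable, so the URL kept for each domain is still the first one that appeared in the file:
from itertools import groupby
from urllib.parse import urlparse

def domain(url):
    return urlparse(url).hostname or ''

with open('urls.txt') as f:
    urls = [line.strip() for line in f if line.strip()]

# sort by domain so duplicate domains end up next to each other
# (the same property uniq(1) relies on), then keep one URL per group
urls.sort(key=domain)
unique = [next(group) for _, group in groupby(urls, key=domain)]

# note: the output is grouped by domain, not in the original file order
print('\n'.join(unique))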
Upvotes: 0
Reputation: 755141
For this particular situation I would not use a regex. URLs are a well-defined format, and there is an easy-to-use parser for that format in the BCL: the Uri type. It can be used to easily parse each URL and get out the domain information you seek.
Here is a quick example:
public List<string> GetUrlWithUniqueDomain(string file) {
  var list = new List<string>();
  var found = new HashSet<string>();
  using ( var reader = new StreamReader(file) ) {
    var line = reader.ReadLine();
    while (line != null) {
      Uri uri;
      // keep the line only if it parses as an absolute URI and its host
      // has not been seen before (HashSet.Add returns false for duplicates)
      if ( Uri.TryCreate(line, UriKind.Absolute, out uri) && found.Add(uri.Host) ) {
        list.Add(line);
      }
      line = reader.ReadLine();
    }
  }
  return list;
}
Upvotes: 1