Reputation: 65
I am currently learning regex and I am trying to filter all links (e.g. http://www.link.com/folder/file.html) from a document with Notepad++. Actually, I want to delete everything else so that in the end only the http links are listed.
So far I tried this : http\:\/\/www\.[a-zA-Z0-9\.\/\-]+
This gives me all the links, which is fine, but how do I delete the remaining text so that in the end I have a neat list of all links?
If I try to replace it with nothing followed by \1, obviously the link will be deleted, but I want the exact opposite to have everything else deleted.
So it should be something like:
- find a string of numbers, letters and special signs until "http"
- delete what was found
- keep searching for more numbers, letters and special signs after "html"
- and delete that again
Any ideas? Thanks so much.
Upvotes: 4
Views: 21346
Reputation: 68
I know my answer won't be RegEx-related, but here is another efficient way to get the lines containing URLs. As Toto mentioned in the comments, this won't remove the text around the links, but it works whenever all links share a common pattern, such as https://.
1. Open Search => Mark and enter https:// as the pattern
2. Tick "Bookmark line"
3. Click Mark All
4. Then Search => Bookmarks => Delete all lines without bookmark
I hope someone who lands here searching for the same problem will find this way more user-friendly.
You can still use RegEx to mark lines :)
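For comparison, the keep-only-bookmarked-lines effect can be sketched outside Notepad++ as well; here is a minimal Python sketch of the same idea, using a made-up document:

```python
# Keep only the lines that contain the common link pattern,
# mirroring "mark lines containing https://, then delete unmarked lines".
document = """some heading
https://www.link.com/folder/file.html
unrelated text
https://www.link.com/other/page.html"""

kept = [line for line in document.splitlines() if "https://" in line]
print("\n".join(kept))
```

This keeps whole lines, so like the bookmark method it will not strip text that sits on the same line as a link.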
Upvotes: 0
Reputation: 1
I did this a different way.
Find everything up to the first/next https or http, then everything that comes after it up to html or htm, and output just the captured '(https or http)(middle part)(html or htm)' with a carriage return/line feed after each.
So:
Find: .*?(https:|http:)(.*?)(html|htm)
Replace with: \1\2\3\r\n
This saves looking for all possible (including non-generic) URL formats.
You will need to manually remove any text after the last matched URL.
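That find/replace pair can be sanity-checked in any regex engine; here is a minimal Python sketch of the same rule (sample text made up, \n standing in for \r\n), which also shows the leftover text after the last match:

```python
import re

text = 'intro http://www.link.com/folder/file.html middle https://www.other.com/page.html trailing junk'

# .*? swallows everything before each URL; \1\2\3 keeps only the URL itself.
result = re.sub(r'.*?(https:|http:)(.*?)(html|htm)', r'\1\2\3\n', text)
print(result)
```

Everything after the last matched URL (' trailing junk' here) is untouched, which is why that part has to be removed by hand.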
Can also be used to create url links:
Find: .*?(https:|http:)(.*?)(html|htm)
Replace: <a href="\1\2\3">\1\2\3</a>\r\n
or image links (jpg/jpeg/gif):
Find: .*?(https:|http:)(.*?)(jpeg|jpg|gif)
Replace: <img src="\1\2\3">\r\n
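The link-building variants work the same way; a quick Python sketch of the image replacement above (sample text made up, \n standing in for \r\n):

```python
import re

text = 'caption http://www.pics.com/cat.jpg more words'

# Wrap each matched image URL in an <img> tag, as in the Replace rule above.
html = re.sub(r'.*?(https:|http:)(.*?)(jpeg|jpg|gif)', r'<img src="\1\2\3">\n', text)
print(html)
```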
Upvotes: 0
Reputation: 343
The answer previously posted by @psxls was a great help for me when I wanted to perform a similar process.
However, that regex rule was written six years ago now; accordingly, I had to adjust and update it so that it could properly handle some recent links, because they use:
- HTTPS instead of the HTTP protocol
- www as the main subdomain
I finally reshuffled the search rule to .*?(https?\:\/\/[a-zA-Z0-9[:punct:]]+) and it worked correctly with the file I had.
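Note that [:punct:] is a POSIX character class, which Notepad++'s regex engine supports inside [...]. Python's re module does not, so an equivalent sketch has to build the class by hand (sample text made up):

```python
import re
import string

# Emulate [a-zA-Z0-9[:punct:]] from the rule above; Python's re has no POSIX classes,
# so the punctuation characters are escaped and spliced in explicitly.
url_class = '[a-zA-Z0-9' + re.escape(string.punctuation) + ']+'
pattern = re.compile(r'https?://' + url_class)

text = 'see https://link.com/folder/file.html and http://www.link.com/x.html here'
print(pattern.findall(text))
```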
Upvotes: 1
Reputation: 6935
In Notepad++, in the Replace dialog (CTRL+H) you can do the following:
Find what: .*?(http\:\/\/www\.[a-zA-Z0-9\.\/\-]+)
Replace with: $1\n
Search mode: Regular expression, with ". matches newline" checked
This will leave you with a list of all your links. There are two issues though:
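The effect of those Replace settings can be reproduced in Python for checking purposes, where the re.DOTALL flag plays the role of the ". matches newline" checkbox (sample text made up):

```python
import re

text = 'junk http://www.link.com/folder/file.html\nmore junk http://www.link.com/a/b.html tail'

# re.DOTALL makes . match newlines, like the ". matches newline" checkbox;
# the lazy .*? eats everything before each URL, and $1 (here \1) keeps the URL.
result = re.sub(r'.*?(http://www\.[a-zA-Z0-9./-]+)', r'\1\n', text, flags=re.DOTALL)
print(result)
```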
Upvotes: 11
Reputation: 27282
Unfortunately, this seemingly simple task is going to be almost impossible to do in notepad++. The regex you would have to construct would be...horrible. It might not even be possible, but if it is, it's not worth it. I pretty much guarantee that.
However, all is not lost. There are other tools more suitable to this problem.
Really what you want is a tool that can search through an input file and print out a list of regex matches. The UNIX utility "grep" will do just that. Don't be scared off because it's a UNIX utility: you can get it for Windows:
http://gnuwin32.sourceforge.net/packages/grep.htm
The grep command line you'll want to use is this:
grep -o 'http:\/\/www.[a-zA-Z0-9./-]\+\?' <filename(s)>
(Where <filename(s)> are the name(s) of the files you want to search for URLs in.)
You might want to shake up your regex a little bit, too. The problems I see with that regex are that it doesn't handle URLs without the 'www' subdomain, and it won't handle secure links (which start with https). Maybe that's what you want, but if not, I would modify it thusly:
grep -o 'https\?:\/\/[a-zA-Z0-9./-]\+\?' <filename(s)>
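The https? part is what makes this second command cover both protocols; a small Python check of the same pattern (sample text made up):

```python
import re

text = 'old http://www.link.com/a.html new https://link.com/b.html'

# https? matches both http and https; www is no longer required.
urls = re.findall(r'https?://[a-zA-Z0-9./-]+', text)
print(urls)
```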
Here are some things to note about these expressions:
- Inside a character group, there's no need to quote metacharacters except for [ and (sometimes) -. I say sometimes because if you put the dash at the end, as I have above, it's no longer interpreted as a range operator.
- The grep utility's syntax, annoyingly, is different than most regex implementations in that most of the metacharacters we're familiar with (?, +, etc.) must be escaped to be used, not the other way around. Which is why you see backslashes before the ? and + characters above.
- Lastly, the repetition metacharacter in this expression (+) is greedy by default, which could cause problems. I made it lazy by appending a ? to it. The way you have your URL match formulated, it probably wouldn't have caused problems, but if you change your match to, say, [^ ] instead of [a-zA-Z0-9./-], you would see URLs on the same line getting combined together.
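That last point about the character class can be seen directly; here is a minimal Python comparison of the two classes on a made-up line where two URLs are separated by a comma:

```python
import re

line = 'http://www.link.com/a.html,http://www.link.com/b.html'

# The restricted class excludes the comma, so the two URLs stay separate matches.
restricted = re.findall(r'http://www\.[a-zA-Z0-9./-]+', line)

# [^ ] accepts the comma too, so a greedy + fuses both URLs into one match.
broad = re.findall(r'http://[^ ]+', line)

print(restricted)
print(broad)
```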
Upvotes: 0