Noah R

Reputation: 5477

How would I look for all URLs on a web page and then save them to individual variables with urllib2 in Python?

Upvotes: 0

Views: 110

Answers (3)

Insomaniacal

Reputation: 1997

You could download the raw HTML with urllib2 and then search through it yourself. There might be easier ways, but you could do this:

1. Download the source code.
2. Split it into a list of tokens using string methods such as split().
3. Check the first 7 characters of each token.
4. If the first 7 characters are http://, write that token to a variable.

Why do you need separate variables, though? Wouldn't it be easier to save them all to a list, calling list.append(URL_YOU_JUST_FOUND) every time you find another URL?
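A minimal sketch of that approach (Python 2, since urllib2 doesn't exist in Python 3; http://example.com is a placeholder):

    import urllib2

    # Download the raw HTML of the page.
    html = urllib2.urlopen('http://example.com').read()

    # Split on whitespace and keep every token whose first 7
    # characters are 'http://', appending each hit to a list.
    urls = []
    for token in html.split():
        if token[:7] == 'http://':
            urls.append(token)

    print urls

Note that the whitespace split is crude: a URL buried in an attribute like href="http://..." won't start its token with http://, so this misses most links; the parser-based answers below are more robust.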

Upvotes: 0

Senthil Kumaran

Reputation: 56823

You can't do it with urllib2 alone. What you are looking for is parsing the URLs out of a web page: fetch the page with urllib2, read its contents, and then pass them through a parser like BeautifulSoup, or, as the other poster explained, use a regex to search the contents of the page.
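A short sketch of that pipeline, assuming BeautifulSoup 3 (the version contemporary with urllib2) and a placeholder URL:

    import urllib2
    from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

    # Fetch the page and hand its contents to the parser.
    page = urllib2.urlopen('http://example.com').read()
    soup = BeautifulSoup(page)

    # Pull the href attribute from every <a> tag that has one.
    urls = [tag['href'] for tag in soup.findAll('a', href=True)]
    print urls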

Upvotes: 0

moinudin

Reputation: 138347

Parse the HTML with an HTML parser, find all <a> tags (e.g. using Beautiful Soup's findAll() method), and check their href attributes.

If, however, you want to find all URLs in the page even if they aren't hyperlinks, then you can use a regular expression which could be anything from simple to ridiculously insane.
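For illustration, a sketch of the simple end of that spectrum (the pattern is a rough assumption, nowhere near a full URL grammar):

    import re
    import urllib2

    page = urllib2.urlopen('http://example.com').read()

    # Match http:// or https:// followed by a run of characters that
    # commonly appear in URLs; stop at whitespace, quotes and brackets.
    urls = re.findall(r'https?://[^\s"\'<>]+', page)
    print urls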

Upvotes: 1
