Ritu Bhandari

Reputation: 301

Parsing robots.txt in Python

I want to parse a robots.txt file in Python. I have explored robotParser and robotExclusionParser, but nothing really satisfies my criteria. I want to fetch all the disallowed and allowed URLs in a single shot rather than manually checking each URL to see whether it is allowed. Is there any library to do this?

Upvotes: 2

Views: 13970

Answers (4)

J. Doe

Reputation: 3634

Why do you have to check your URLs manually? You can use urllib.robotparser in Python 3, and do something like this:

import urllib.robotparser as urobot
import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup


url = "http://example.com"           # urlopen needs a full URL with a scheme
rp = urobot.RobotFileParser()
rp.set_url(url + "/robots.txt")
rp.read()
if rp.can_fetch("*", url):
    site = urllib.request.urlopen(url)
    sauce = site.read()
    soup = BeautifulSoup(sauce, "html.parser")
    actual_url = site.geturl()        # final URL after any redirects

    my_list = soup.find_all("a", href=True)
    for i in my_list:
        # rather than skipping "#" here you can filter the list before looping over it
        if i["href"] != "#":
            # resolve relative links against the page URL
            newurl = urljoin(actual_url, i["href"])
            try:
                if rp.can_fetch("*", newurl):
                    site = urllib.request.urlopen(newurl)
                    # do what you want on each authorized webpage
            except Exception:
                pass
else:
    print("cannot scrape")

Upvotes: 9

Tejas Tank

Reputation: 1216

I'd like to share the smallest piece of code:

import re
# "response" is assumed to be a fetched robots.txt, e.g. response = requests.get("https://example.com/robots.txt")
sitemap_urls = re.findall(r'^sitemap:\s*(\S+)', response.text, re.IGNORECASE | re.MULTILINE)
print("sitemap_urls", sitemap_urls)

Pretty easy to extract, and the Sitemap directive is matched whether it appears in upper, lower, or mixed case.
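For example, running the regex against a hypothetical robots.txt body (the content below is made up purely for illustration) picks out every sitemap line regardless of case:

import re

robots_txt = """User-agent: *
Disallow: /private/
SITEMAP: https://example.com/sitemap.xml
sitemap: https://example.com/news-sitemap.xml
"""

sitemap_urls = re.findall(r'^sitemap:\s*(\S+)', robots_txt, re.IGNORECASE | re.MULTILINE)
print(sitemap_urls)
# ['https://example.com/sitemap.xml', 'https://example.com/news-sitemap.xml']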

Please test it and share your feedback.

Upvotes: 0

socrates

Reputation: 1323

Actually, RobotFileParser can do the job; consider the following code:

from urllib.robotparser import RobotFileParser

def iterate_rules(robots_content):
    rfp = RobotFileParser()
    rfp.parse(robots_content.splitlines())
    # default_entry holds the "*" entry; entries, rulelines, path and
    # allowance are internal attributes of CPython's urllib.robotparser
    entries = ([rfp.default_entry, *rfp.entries]
               if rfp.default_entry else rfp.entries)
    for entry in entries:
        for ruleline in entry.rulelines:
            yield (entry.useragents, ruleline.path, ruleline.allowance)

from my post on medium
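For example, running it on a small hand-written robots.txt (the sample content below is just an illustration; the entries, rulelines, path and allowance attributes it reads are internal to CPython's urllib.robotparser) yields one tuple per rule:

sample = """User-agent: *
Allow: /public/
Disallow: /private/
"""

for useragents, path, allowance in iterate_rules(sample):
    print(useragents, path, allowance)

# ['*'] /public/ True
# ['*'] /private/ False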

Upvotes: 0

Yaman Jain

Reputation: 1251

You can use the curl command to read the robots.txt file into a single string, split it on newlines, and check for Allow and Disallow rules.

import os

# read robots.txt into a single string via curl
result = os.popen("curl https://fortune.com/robots.txt").read()
result_data_set = {"Disallowed": [], "Allowed": []}

for line in result.split("\n"):
    if line.startswith('Allow'):       # this is for allowed paths
        result_data_set["Allowed"].append(line.split(':', 1)[1].strip().split(' ')[0])       # ignore trailing comments or other junk
    elif line.startswith('Disallow'):  # this is for disallowed paths
        result_data_set["Disallowed"].append(line.split(':', 1)[1].strip().split(' ')[0])    # ignore trailing comments or other junk

print(result_data_set)
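If you then want full URLs rather than rule paths, one option (a sketch assuming the result_data_set dict built above) is to join each path onto the site root with urllib.parse.urljoin:

from urllib.parse import urljoin

base_url = "https://fortune.com"
# note: wildcard rules (paths containing *) are joined verbatim
allowed_urls = [urljoin(base_url, path) for path in result_data_set["Allowed"]]
disallowed_urls = [urljoin(base_url, path) for path in result_data_set["Disallowed"]]
print(allowed_urls)
print(disallowed_urls)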

Upvotes: 2
