patach

Reputation: 3

scraping multiple URLs with bs4

I am trying to compile patent files from the USPTO webpage with BeautifulSoup.

import requests
import bs4

df['link']
urls=df['link'].to_numpy()
urls
for i in urls:
    page = requests.get(i)
    ## storing the content of the page in a variable
    txt = page.text
    ## creating BeautifulSoup object
    soup = bs4.BeautifulSoup(txt, 'html.parser')
    soup

However, it only prints one of the URLs, not all 5 links. I need all 5 links scraped as text.

Any suggestions appreciated. Cheers

Links I need to scrape:

array(['http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=2&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=3&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=4&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n',
       'http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=5&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n'],
      dtype=object)

Upvotes: -1

Views: 75

Answers (1)

import pandas as pd


def Main(url):
    for item in range(1, 6):
        # read_html keeps only the table whose width attribute is "100%" and
        # whose text matches ^\d{4}/\d+, skipping its first (header) row
        df = pd.read_html(url.format(item), attrs={
                          'width': '100%'}, skiprows=1, match=r"^\d{4}/\d+")[0]
        # append each results page to the same CSV file
        df.to_csv("data.csv", index=False, mode="a")


Main("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r={}&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n")


New code, per the request in the comments:

import requests


def Main(url):
    # one session reused across all five requests
    with requests.Session() as req:
        for item in range(1, 6):
            r = req.get(url.format(item))
            # append each page's raw HTML to the same text file
            with open("data.txt", 'a') as f:
                f.write(r.text)


Main("http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r={}&f=G&l=50&co1=AND&d=PTXT&s1=g06n.CPCL.&OS=CPCL/g06n&RS=CPCL/g06n")

Upvotes: 0
