crackers

Reputation: 357

Downloading txt files with requests in Python

I would like to download multiple txt files from an API. I can download PDF files using the following code. However, would anyone be willing to help with how to customise the requested document type so it downloads txt files instead? Thanks.

links = ["P167897", "P173997", "P166309"]

for link in links:
    end_point = f"https://search.worldbank.org/api/v2/wds?" \
                f"format=json&includepublicdocs=1&" \
                f"fl=docna,lang,docty,repnb,docdt,doc_authr,available_in&" \
                f"os=0&rows=20&proid={link}&apilang=en"
    documents = requests.get(end_point).json()["documents"]
    for document_data in documents.values():
        try:
            pdf_url = document_data["pdfurl"]
            file_path = Path(f"K:/downloading_text/{link}/{pdf_url.rsplit('/')[-1]}")
            file_path.parent.mkdir(parents=True, exist_ok=True)
            with file_path.open("wb") as f:
                f.write(requests.get(pdf_url).content)
            time.sleep(1)
        except KeyError:
            continue

Upvotes: 2

Views: 801

Answers (2)

Maurice Meyer

Reputation: 18106

You just need to change the URL from:

.../pdf/Sierra-Leone-AFRICA-WEST-P167897-Sierra-Leone-Free-Education-Project-Procurement-Plan.pdf

to:

.../text/Sierra-Leone-AFRICA-WEST-P167897-Sierra-Leone-Free-Education-Project-Procurement-Plan.txt

which can be done easily using str.replace():

links = ["P167897", "P173997", "P166309"]

for link in links:
    end_point = f"https://search.worldbank.org/api/v2/wds?" \
                f"format=json&includepublicdocs=1&" \
                f"fl=docna,lang,docty,repnb,docdt,doc_authr,available_in&" \
                f"os=0&rows=20&proid={link}&apilang=en"
    #print(requests.get(end_point).json())
    #break
    documents = requests.get(end_point).json()["documents"]
    for document_data in documents.values():
        try:
            pdf_url = document_data["pdfurl"]
            txt_url = pdf_url.replace('.pdf', '.txt')
            txt_url = txt_url.replace('/pdf/', '/text/')
            print(f"Downloading: {txt_url}")
            uniqueId = txt_url[6:].split('/')[4]
            file_path = Path(
                f"/tmp/{link}/{uniqueId}-{txt_url.rsplit('/')[-1]}"
            )
            file_path.parent.mkdir(parents=True, exist_ok=True)
            with file_path.open("wb") as f:
                f.write(requests.get(txt_url).content)
            time.sleep(1)
        except KeyError:
            continue

Out:

Downloading: http://documents.worldbank.org/curated/en/106981614570591392/text/Official-Documents-Grant-Agreement-for-Additional-Financing-Grant-TF0B4694.txt
Downloading: http://documents.worldbank.org/curated/en/331341614570579132/text/Official-Documents-First-Restatement-to-the-Disbursement-Letter-for-Grant-D6810-SL-and-for-Additional-Financing-Grant-TF0B4694.txt
...
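
Note that not every document necessarily has a text rendition; a quick status check before writing (a minimal sketch, assuming the server returns an error status when the .txt file is missing) avoids saving error pages:

response = requests.get(txt_url)
if response.ok:  # skip documents without a text rendition
    file_path.write_bytes(response.content)
else:
    print(f"No text version available: {txt_url}")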

Upvotes: 2

Joshua Rose

Reputation: 40

If you are fine with not using requests, you can usually use curl or wget, as long as the URL is openly accessible, by invoking them through subprocess. For example:

import subprocess

url = "..."  # the URL of the file you want to download
subprocess.run(["wget", url], check=True)
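
A curl equivalent is just as short (a sketch, reusing one of the txt URLs from the answer above; -L follows redirects, -o names the local output file):

import subprocess

url = ("http://documents.worldbank.org/curated/en/106981614570591392/text/"
       "Official-Documents-Grant-Agreement-for-Additional-Financing-Grant-TF0B4694.txt")
subprocess.run(["curl", "-L", url, "-o", url.rsplit("/", 1)[-1]], check=True)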

https://www.gnu.org/software/wget/

https://docs.python.org/3/library/subprocess.html

https://curl.se/

Upvotes: 0
