paquino

Reputation: 45

Python request based on a list

I have a Python script (url.py) like this:

import requests

headers = {
'authority': 'www.spain.com',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-language': 'en-US,en;q=0.9,pt;q=0.8',
}

links=['https://www.spain.com']
for url in links:
    page = requests.get(url, headers=headers)
    print(page)

Output:

ubuntu@OS-Ubuntu:/mnt/$ python3 url.py
<Response [200]>

I need this to be filled in automatically because I will receive a txt file (domain.txt) with the domains like this:

www.spain.com
www.uk.com
www.italy.com

I want the Python script to be generic and reusable: I would just add more domains to my domain.txt, run url.py, and it would automatically make the request against every domain in domain.txt.

Can you help me with that?

Upvotes: 1

Views: 242

Answers (1)

JonasUJ

Reputation: 108

Assuming url.py is located in the same directory as domains.txt, you can open the file and read each link into a list using:

with open('domains.txt', 'r') as f:
    links = f.read().splitlines()
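Putting it together with the loop from the question, a minimal sketch could look like the following. Note one assumption: the entries in domains.txt are bare hostnames (e.g. `www.spain.com`), so a scheme such as `https://` has to be prepended before `requests.get` will accept them. The abbreviated `headers` dict stands in for the full one in the question.

```python
def load_links(path):
    """Read one domain per line and turn each into an https URL.

    splitlines() strips the newline characters; blank lines are skipped.
    """
    with open(path, 'r') as f:
        return ['https://' + line.strip()
                for line in f.read().splitlines()
                if line.strip()]

if __name__ == '__main__':
    import requests

    # Abbreviated headers; reuse the full dict from the question if needed.
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    }

    for url in load_links('domains.txt'):
        page = requests.get(url, headers=headers)
        print(url, page.status_code)
```

With this, adding a new site is just a matter of appending one line to domains.txt; no change to the script is needed.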

Upvotes: 2

Related Questions