John Paras

Reputation: 137

Download all csv files from URL

I want to download all of the CSV files linked from this page. Any idea how I do this?

from bs4 import BeautifulSoup
import requests

url = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(url, "html.parser")
# This prints every link on the page, but I still need to
# filter the CSV links and actually download them.
for link in soup.findAll("a"):
    print(link.get("href"))

Upvotes: 3

Views: 3223

Answers (2)

Padraic Cunningham

Reputation: 180391

You just need to filter the hrefs, which you can do with a CSS selector: a[href$=".csv"] will find the hrefs ending in .csv. Then join each one to the base URL, request it, and finally write the content:

from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin
from os.path import basename

base = "http://www.football-data.co.uk/"
url = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(url, "html.parser")
# Select only the anchors whose href ends in .csv, join each
# relative href to the base url, then download and save the bytes.
for link in (urljoin(base, a["href"]) for a in soup.select('a[href$=".csv"]')):
    with open(basename(link), "wb") as f:
        f.write(requests.get(link).content)

That will give you five files, E0.csv, E1.csv, E2.csv, E3.csv and E4.csv, with all the data inside.
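One caveat: basename drops the directory part of the link, so if the page links files with the same name from several season directories, later downloads silently overwrite earlier ones. A minimal collision-safe sketch, assuming the hrefs are relative paths with directory components (the same assumption the second answer's filename handling makes), is to flatten the whole path into the filename:

from bs4 import BeautifulSoup
import requests
from urllib.parse import urljoin

base = "http://www.football-data.co.uk/"
soup = BeautifulSoup(requests.get(base + "englandm.php").text, "html.parser")
for a in soup.select('a[href$=".csv"]'):
    href = a["href"]
    # Keep the whole relative path in the name, e.g. a link like
    # dir1/dir2/E0.csv is saved as dir1_dir2_E0.csv, so seasons don't collide.
    with open(href.replace("/", "_"), "wb") as f:
        f.write(requests.get(urljoin(base, href)).content)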

Upvotes: 1

user2096803

Reputation:

Something like this should work:

from bs4 import BeautifulSoup
from time import sleep
import requests


if __name__ == '__main__':
    url = requests.get('http://www.football-data.co.uk/englandm.php').text
    soup = BeautifulSoup(url, "html.parser")
    for link in soup.findAll("a"):
        current_link = link.get("href")
        # Anchors without an href return None, so guard before calling endswith.
        if current_link and current_link.endswith('.csv'):
            print('Found CSV: ' + current_link)
            print('Downloading %s' % current_link)
            sleep(10)  # wait between requests so we don't hammer the server
            response = requests.get('http://www.football-data.co.uk/%s' % current_link, stream=True)
            # Flatten the relative path into a filename,
            # e.g. dir1/dir2/E0.csv -> dir1_dir2_E0.csv.
            fn = '_'.join(current_link.split('/'))
            with open(fn, "wb") as handle:
                for data in response.iter_content(chunk_size=1024):
                    handle.write(data)
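One thing worth adding to either answer: neither snippet checks the HTTP status, so a 404 error page would happily be saved to disk as a .csv. requests provides raise_for_status for this; a minimal sketch (file_url here is a hypothetical link, not one taken from the page):

import requests

# Hypothetical CSV link of the kind the page exposes.
file_url = "http://www.football-data.co.uk/some_dir/E0.csv"
response = requests.get(file_url, stream=True)
response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx instead of saving the error page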

Upvotes: 1
