Ken

Reputation: 11

Is there a better, simpler way to download multiple files?

I went to the NYC MTA website to download some turnstile data and came up with a Python script that downloads only the 2017 files.

Here is the script:

import urllib
import re

# Grab the page that lists the weekly turnstile files
html = urllib.urlopen('http://web.mta.info/developers/turnstile.html').read()
# Keep only the 2017 links (filenames like data/nyct/turnstile/turnstile_170107.txt)
links = re.findall(r'href="(data/\S*17[01]\S*[a-z])"', html)

for link in links:
    txting = urllib.urlopen('http://web.mta.info/developers/' + link).read()
    lin = link[20:40]  # slice out the filename, e.g. turnstile_170107.txt
    fhand = open(lin, 'w')
    fhand.write(txting)
    fhand.close()

Is there a simpler way to write this script?

Upvotes: 0

Views: 1957

Answers (2)

aquil.abdullah

Reputation: 3157

The code below should do what you need.

import requests
import bs4
import time
import random
import re

pattern = '2017'
url_base = 'http://web.mta.info/developers/'
url_home = url_base + 'turnstile.html'
response = requests.get(url_home)
data = dict()

soup = bs4.BeautifulSoup(response.text, 'html.parser')
# collect only the links whose anchor text mentions 2017
links = [link.get('href') for link in soup.find_all('a', text=re.compile(pattern))]
for link in links:
    url = url_base + link
    print "Pulling data from:", url
    response = requests.get(url)
    # I don't know what you want to do with the data, so here I just store it in a
    # dict, but you could write it to a file as you did in your example.
    data[link] = response.text
    not_a_robot = random.randint(2, 15)
    print "Waiting %d seconds before next query." % not_a_robot
    time.sleep(not_a_robot)  # some APIs will throttle you if you hit them too quickly

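If you do want the files on disk rather than just a dict in memory, here is a minimal sketch of that last step. It reuses the links, url_base, and requests names from the snippet above, and naming each file after the basename of its link is just one choice, not something your original code requires:

import os
import requests

# a minimal sketch, assuming `links` and `url_base` from the snippet above
for link in links:
    response = requests.get(url_base + link)
    filename = os.path.basename(link)  # e.g. turnstile_170107.txt
    with open(filename, 'w') as out:
        out.write(response.text)
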
Upvotes: 0

zbw

Reputation: 962

As suggested by @dizzyf, you can use BeautifulSoup to get the href values from the web page.

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'html.parser')
links = [link.get('href') for link in soup.find_all('a')
                          if 'turnstile_17' in link.get('href')]

If you don't have to get the files in Python (and you're on a system with the wget command), you can write the links to a file:

with open('url_list.txt', 'w') as url_file:
    for url in links:
        # the hrefs on the page are relative, so prepend the site base to get full URLs
        url_file.write('http://web.mta.info/developers/' + url + '\n')

Then download them with wget:

$ wget -i url_list.txt

wget -i downloads all the URLs from the file into the current directory, preserving the filenames.
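
And if you end up wanting to stay in Python after all, here is a rough sketch of the same loop. It assumes the links list from above, that each href is relative to the developers page, and uses the basename of each link as the local filename:

import os
import urllib

# a minimal sketch: fetch each file and save it under its original name,
# roughly what `wget -i url_list.txt` does above
for link in links:
    url = 'http://web.mta.info/developers/' + link  # hrefs on the page are relative
    filename = os.path.basename(link)               # e.g. turnstile_170107.txt
    urllib.urlretrieve(url, filename)               # urllib.request.urlretrieve on Python 3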

Upvotes: 2
