Reputation: 5
I am crawling URLs from a CSV file, and each URL has a name. How can I download these URLs and save each one under its name?
import csv
import scrapy

urls = []
reader = csv.reader(open("source1.csv"))
for Name, Sources1 in reader:
    urls.append(Sources1)

class Spider(scrapy.Spider):
    name = "test"
    start_urls = urls[1:]

    def parse(self, response):
        filename = Name + '.pdf'  # how can I get the names I read from the csv file?
Upvotes: 0
Views: 188
Reputation: 23896
Perhaps you want to override the start_requests() method instead of using start_urls?
Example:
class MySpider(scrapy.Spider):
    name = 'test'

    def start_requests(self):
        data = read_csv()
        for d in data:
            yield scrapy.Request(d.url, meta={'name': d.name})
The meta dict you set on the request is passed along to the resulting response, so you can later do:
def parse(self, response):
    name = response.meta.get('name')
    ...
Upvotes: 2