Reputation: 799
I extracted links to images from https://www.topgear.com/car-reviews/ferrari/laferrari using the scrapy shell in bash. Is there any simple way to save them without writing a pipeline?
scrapy shell https://www.topgear.com/car-reviews/ferrari/laferrari
response.xpath('//div[@class="carousel__content-inner"]//img/@srcset').extract()
['https://www.topgear.com/sites/default/files/styles/fit_980x551/public/cars-car/carousel/2015/02/buyers_guide_-_laf_-_front.jpg?itok=KiD7ErMe 980w',
'https://www.topgear.com/sites/default/files/styles/fit_980x551/public/cars-car/carousel/2015/02/buyers_guide_-_laf_-_rear.jpg?itok=JMYaaJ5L 980w',
'https://www.topgear.com/sites/default/files/styles/fit_980x551/public/cars-car/carousel/2015/02/buyers_guide_-_laf_-_interior.jpg?itok=4Z0zIdH_ 980w',
'https://www.topgear.com/sites/default/files/styles/fit_980x551/public/cars-car/carousel/2015/02/buyers_guide_-_laf_-_side.jpg?itok=OKl2MOJ2 980w']
Thanks for any help.
Upvotes: 2
Views: 249
Reputation: 1547
You can use Scrapy's Selector (https://docs.scrapy.org/en/latest/topics/selectors.html) together with the requests library:
from scrapy.selector import Selector
import requests
from tqdm import tqdm

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
response = requests.get('https://www.topgear.com/car-reviews/ferrari/laferrari', headers=headers)

# Each srcset entry looks like "https://...jpg?itok=... 980w",
# so split off the "980w" width descriptor to get a usable URL.
srcset = Selector(text=response.text).xpath('//div[@class="carousel__content-inner"]//img/@srcset').getall()
links = [entry.split()[0] for entry in srcset]

for i, image_url in enumerate(tqdm(links)):
    try:
        response = requests.get(image_url, headers=headers)
    except requests.RequestException:
        pass  # skip images that fail to download
    else:
        if response.status_code == 200:
            with open('{:02}.jpg'.format(i), 'wb') as f:
                f.write(response.content)
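Alternatively, if you want to stay inside the scrapy shell session from your question instead of re-fetching the page with requests, you can save the files with just the standard library. This is a minimal sketch assuming the shell was started against the same URL; the generic 'Mozilla/5.0' user agent and the index-based filenames are my own choices, not anything the site requires:
# Run inside: scrapy shell https://www.topgear.com/car-reviews/ferrari/laferrari
import urllib.request

srcset = response.xpath('//div[@class="carousel__content-inner"]//img/@srcset').getall()

for i, entry in enumerate(srcset):
    url = entry.split()[0]  # drop the "980w" width descriptor
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    with urllib.request.urlopen(req) as resp, open('{:02}.jpg'.format(i), 'wb') as f:
        f.write(resp.read())
Either way, the key point is that each srcset value carries a trailing width descriptor that has to be stripped before the URL can be downloaded.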
Upvotes: 2