Reputation: 11
I am retrieving track data from the Spotify API for a total of 10 tracks, but it takes around 2-3 seconds to run. Is there any way to speed it up by using some Python libraries like multiprocessing, or something else?
track_url = []
track_name = []
album_image = []
for i in range(len(tracks_recommend)):
    track_id = tracks_recommend.at[i, 'id']
    # call to spotify api
    res = spotify.track(track_id=track_id)
    track_url.append(res['external_urls'])
    track_name.append(res['name'])
    album_image.append(res['album']['images'][0]['url'])
Upvotes: 0
Views: 287
Reputation: 671
Depends on whether Spotify tracks your usage and rate-limits you to one request at a time. If they don't, this would be a good starting point:
from multiprocessing import Pool

def process_track(track_id):
    # call to spotify api
    res = spotify.track(track_id=track_id)
    return (res['external_urls'], res['name'], res['album']['images'][0]['url'])

track_ids = [tracks_recommend.at[i, 'id'] for i in range(len(tracks_recommend))]

with Pool(4) as p:  # replace 4 with whatever number of workers you want
    output = p.map(process_track, track_ids)

track_url, track_name, album_image = zip(*output)
This won't reduce the latency of any single request, but it should increase overall throughput.
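Since these requests are I/O-bound rather than CPU-bound, a thread pool can do the same job without spawning processes or pickling the spotify client. A minimal sketch with the standard library's concurrent.futures, reusing process_track and track_ids from above:

from concurrent.futures import ThreadPoolExecutor

# Threads are enough here: the workers spend their time waiting on the
# network, and nothing has to be shipped to another process.
with ThreadPoolExecutor(max_workers=4) as executor:
    output = list(executor.map(process_track, track_ids))

track_url, track_name, album_image = zip(*output)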
Upvotes: 0
Reputation: 226486
Is there any way to speed it up by using some Python libraries like multiprocessing
Yes, multiprocessing works great for running API requests in parallel; since these calls are I/O-bound, its ThreadPool is a good fit. This will get you started:
from multiprocessing.pool import ThreadPool as Pool

def recommend(track_id):
    return spotify.track(track_id=track_id)

track_ids = [tracks_recommend.at[i, 'id']
             for i in range(len(tracks_recommend))]

with Pool(5) as pool:
    for res in pool.map(recommend, track_ids):
        ...
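For example, to rebuild the three lists from the question, you could collect the mapped results and pick out the same fields (a sketch, assuming each res has the shape shown in the question):

with Pool(5) as pool:
    results = pool.map(recommend, track_ids)  # list of track dicts, in order

# same fields the question collects
track_url = [res['external_urls'] for res in results]
track_name = [res['name'] for res in results]
album_image = [res['album']['images'][0]['url'] for res in results]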
Upvotes: 1