David542

Reputation: 110093

How to do two requests in parallel

I have the following code, which requests statistics from Amazon's Mechanical Turk API:

params = {'Operation': 'GetRequesterStatistic', 'Statistic': 'NumberHITsAssignable', 'TimePeriod': 'LifeToDate'}
response = self.conn.make_request(action=None, params=params, path='/', verb='GET')
data['ActiveHITs'] = self.conn._process_response(response).LongValue

params = {'Operation': 'GetRequesterStatistic', 'Statistic': 'NumberAssignmentsPending', 'TimePeriod': 'LifeToDate'}
response = self.conn.make_request(action=None, params=params, path='/', verb='GET')
data['PendingAssignments'] = self.conn._process_response(response).LongValue

Each of these requests takes about 1s waiting for Amazon to return data. How would I run both of these in parallel, so it would (ideally) take 1s to run, instead of 2s?

Upvotes: 0

Views: 319

Answers (1)

Stefano Sanfilippo

Reputation: 33046

You can use a multiprocessing.Pool to parallelize the requests:

from multiprocessing import Pool

class Foo:
    # Single leading underscore: Pool.map has to pickle the callable,
    # and name-mangled (double-underscore) bound methods can't be pickled.
    def _fetch(self, statistic):
        params = {
            'Operation': 'GetRequesterStatistic',
            'Statistic': statistic,
            'TimePeriod': 'LifeToDate'
        }
        response = self.conn.make_request(
            action=None, params=params, path='/', verb='GET'
        )
        return self.conn._process_response(response).LongValue

    def get_stats(self):
        pool = Pool()
        try:
            # map() blocks until every worker has returned; results come
            # back in the same order as the arguments.
            results = pool.map(self._fetch, [
                'NumberHITsAssignable', 'NumberAssignmentsPending'
            ])
        finally:
            pool.close()
            pool.join()
        data = {}
        data['ActiveHITs'], data['PendingAssignments'] = results
        return data

This has the nice side effect of scaling to any number of requests. By default, one worker per core is created; you can change that by passing the desired number to Pool, e.g. Pool(2).
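Note that Pool.map also has to pickle self (including self.conn) to ship it to the worker processes, which may fail for connection objects. Since these calls spend their time waiting on the network rather than on the CPU, threads work just as well here and avoid pickling entirely. A minimal sketch using the standard library's concurrent.futures, reusing the _fetch method from above:

from concurrent.futures import ThreadPoolExecutor

class Foo:
    def _fetch(self, statistic):
        ...  # unchanged: same request/response code as above

    def get_stats(self):
        statistics = ['NumberHITsAssignable', 'NumberAssignmentsPending']
        # One thread per request: both HTTP calls are in flight at once,
        # so total wall time is roughly that of the slowest request.
        with ThreadPoolExecutor(max_workers=len(statistics)) as executor:
            results = list(executor.map(self._fetch, statistics))
        return dict(zip(['ActiveHITs', 'PendingAssignments'], results))

executor.map behaves like pool.map: it preserves argument order and raises any exception a worker hit when its result is consumed.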

Upvotes: 1
