Sergey Luchko

How to do parallel API request processing in Python?

I'm writing a REST service in Python (Django), and this service needs to talk to another REST service through its API.

Here is some code, with the approximate time each line takes:

connection = statServer("myname", "mypassword")

q1 = connection.getJSONdict("query1") # approximately 15 seconds 
q2 = connection.getJSONdict("query2") # approximately 20 seconds
q3 = connection.getJSONdict("query3") # approximately 15 seconds

# my own processing takes approximately 0.01 seconds
# merge q1 + q2 + q3

It's clear to me that each getJSONdict("query") call does nothing except wait on I/O, so it doesn't consume processor time.

The requests run sequentially, so I could run them in separate threads. I know that Python supposedly doesn't provide real parallel threading, but in my case the code is just waiting on I/O, so threads should still help.

I think this is a very common use case in Python, so if you have dealt with a task like this, please help me solve mine.


I have been thinking about a Fork/Join-style approach, or better, a ThreadPoolExecutor to process the requests (and reuse threads) across all requests to my REST service; see the sketch below.
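Something like the following is what I have in mind (just an untested sketch, reusing the statServer connection from above):

from concurrent.futures import ThreadPoolExecutor

connection = statServer("myname", "mypassword")

# One worker per query; threads are enough because the calls just wait on I/O.
with ThreadPoolExecutor(max_workers=3) as executor:
    f1 = executor.submit(connection.getJSONdict, "query1")
    f2 = executor.submit(connection.getJSONdict, "query2")
    f3 = executor.submit(connection.getJSONdict, "query3")

    # result() blocks until each request finishes, so the total time is
    # roughly the slowest single request (~20s) instead of the sum (~50s).
    q1, q2, q3 = f1.result(), f2.result(), f3.result()

# merge q1 + q2 + q3 as before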


Upvotes: 0

Answers (1)

Sergey Luchko

I managed to do it myself.

from multiprocessing.pool import Pool, ThreadPool
# ... other imports

# You can decide here whether to use processes or threads:
# if you want threads, change Pool() to ThreadPool().
pool = Pool()
connection = statServer("myname", "mypassword")

res = pool.map(connection.getJSONdict, ["query1", "query2", "query3"])
print(res)
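One caveat on the choice: with processes (Pool()), the callable and its arguments have to be picklable, and a live connection object like this one often isn't. Since these calls just wait on I/O anyway, ThreadPool() is usually the better fit here.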

Upvotes: 2
