Reputation: 2168
I'm trying to make a bunch of requests (~1000) using Asyncio and the aiohttp library, but I am running into a problem that I can't find much info on.
When I run this code with 10 urls, it runs just fine. When I run it with 100+ urls, it breaks with a RuntimeError: Event loop is closed error.
import asyncio
import aiohttp

@asyncio.coroutine
def get_status(url):
    code = '000'
    try:
        res = yield from asyncio.wait_for(aiohttp.request('GET', url), 4)
        code = res.status
        res.close()
    except Exception as e:
        print(e)
    print(code)

if __name__ == "__main__":
    urls = ['https://google.com/'] * 100
    coros = [asyncio.Task(get_status(url)) for url in urls]
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.wait(coros))
    loop.close()
The stack trace can be found here.
Any help or insight would be greatly appreciated as I've been banging my head over this for a few hours now. Obviously this would suggest that an event loop has been closed that should still be open, but I don't see how that is possible.
Upvotes: 23
Views: 26245
Reputation: 229
This is a bug in the interpreter. Fortunately, it was finally fixed in 3.10.6, so you just need to update your installed Python.
Upvotes: 1
Reputation: 13415
You're right: loop.getaddrinfo uses a ThreadPoolExecutor to run socket.getaddrinfo in a thread.

You're using asyncio.wait_for with a timeout, which means res = yield from asyncio.wait_for(...) will raise an asyncio.TimeoutError after 4 seconds. The get_status coroutines then return None and the loop stops. If a job finishes after that, it will try to schedule a callback in the event loop and raise an exception, since the loop is already closed.
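The wait_for behaviour can be seen without aiohttp at all. A minimal sketch, where slow_request is a hypothetical stand-in for a request that outlives the caller's timeout: wait_for cancels the coroutine it wraps, but work already handed off to an executor thread (like getaddrinfo) keeps running and can finish after the loop is closed.

```python
import asyncio

async def slow_request():
    # hypothetical stand-in for an aiohttp request that
    # takes longer than the caller's timeout
    await asyncio.sleep(10)

async def main():
    task = asyncio.ensure_future(slow_request())
    try:
        # mirrors asyncio.wait_for(aiohttp.request(...), 4) from the question
        await asyncio.wait_for(task, 0.1)
    except asyncio.TimeoutError:
        pass
    # on timeout, wait_for cancels the inner task
    return task.cancelled()

cancelled = asyncio.run(main())
print(cancelled)  # True
```

Note that cancellation only reaches coroutines; a blocking call already running in an executor thread cannot be cancelled this way, which is why late callbacks can still hit a closed loop.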
Upvotes: 7
Reputation: 17366
The bug is filed as https://github.com/python/asyncio/issues/258. Stay tuned.

As a quick workaround, I suggest using a custom executor, e.g.:

import concurrent.futures

loop = asyncio.get_event_loop()
executor = concurrent.futures.ThreadPoolExecutor(5)
loop.set_default_executor(executor)

Before finishing your program, please do

executor.shutdown(wait=True)
loop.close()
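A self-contained sketch of the workaround's shutdown ordering, with blocking_lookup as a hypothetical stand-in for socket.getaddrinfo (the blocking call that loop.getaddrinfo runs in the default executor):

```python
import asyncio
import concurrent.futures

def blocking_lookup(host):
    # hypothetical stand-in for socket.getaddrinfo
    return host.upper()

loop = asyncio.new_event_loop()
executor = concurrent.futures.ThreadPoolExecutor(5)
loop.set_default_executor(executor)

# run_in_executor(None, ...) now dispatches jobs to our executor
result = loop.run_until_complete(
    loop.run_in_executor(None, blocking_lookup, 'example.com'))

# shut the executor down *before* closing the loop, so no worker
# thread outlives the loop and tries to schedule callbacks on it
executor.shutdown(wait=True)
loop.close()
print(result)  # EXAMPLE.COM
```

The point of shutdown(wait=True) is that it blocks until all worker threads have finished, so nothing can touch the loop after loop.close().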
Upvotes: 18