J. Taylor

Reputation: 4865

aiohttp: How to efficiently check HTTP headers before downloading response body?

I am writing a web crawler using asyncio/aiohttp. I want the crawler to download only HTML content and skip everything else. I wrote a simple function to filter URLs based on their extensions, but this is not reliable because many download links do not include a filename/extension.

I could use aiohttp.ClientSession.head() to send a HEAD request, check the Content-Type header to make sure it's HTML, and then send a separate GET request. But this doubles the latency by requiring two requests per page (one HEAD, one GET), and I'd like to avoid that if possible.

Is it possible to just send a regular GET request, and set aiohttp into "streaming" mode to download just the header, and then proceed with the body download only if the MIME type is correct? Or is there some (fast) alternative method for filtering out non-HTML content that I should consider?


UPDATE

As requested in the comments, I've included some example code of what I mean by making two separate HTTP requests (one HEAD request and one GET request):

import asyncio
import aiohttp

urls = ['http://www.google.com', 'http://www.yahoo.com']
results = []

async def get_urls_async(urls):
    loop = asyncio.get_running_loop()

    async with aiohttp.ClientSession() as session:
        tasks = []

        for u in urls:
            print(f"This is the first (HEAD) request we send for {u}")
            tasks.append(loop.create_task(session.head(u)))

        results = []
        for t in asyncio.as_completed(tasks):
            response = await t
            url = response.url

            if "text/html" in response.headers.get("Content-Type", ""):
                print("Sending the 2nd (GET) request to retrieve the body")
                r = await session.get(url)
                results.append((url, await r.read()))
            else:
                print(f"Not HTML, rejecting: {url}")

        return results

results = asyncio.run(get_urls_async(urls))

Upvotes: 4

Views: 3471

Answers (1)

amohr

Reputation: 477

This is a protocol problem: if you do a GET, the server wants to send the body. If you don't retrieve the body, you have to discard the connection (which is in fact what aiohttp does if you don't call read() before __aexit__ on the response).

So the code above should do more or less what you want. NOTE: the server may already send more than just the headers in the first chunk.
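In other words, a single GET per URL can work: aiohttp returns the response object as soon as the headers arrive, and the body is only transferred when you call read(). Here is a minimal sketch of that approach; the function name fetch_if_html is my own, and closing the response to abandon a non-HTML body follows the connection-discard behavior described above.

```python
import asyncio
import aiohttp

async def fetch_if_html(session, url):
    # session.get() completes once the response headers are available;
    # no body bytes are consumed by our code until read() is called.
    async with session.get(url) as response:
        content_type = response.headers.get("Content-Type", "")
        if "text/html" in content_type:
            # Looks like HTML: now actually download the body.
            return url, await response.read()
        # Not HTML: close the response without reading. This discards
        # the underlying connection instead of downloading the body.
        response.close()
        return url, None

async def crawl(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [asyncio.create_task(fetch_if_html(session, u)) for u in urls]
        results = await asyncio.gather(*tasks)
        # Keep only the URLs that turned out to be HTML.
        return [(u, body) for (u, body) in results if body is not None]

# results = asyncio.run(crawl(['http://www.google.com', 'http://www.yahoo.com']))
```

The trade-off is that rejected URLs cost a dropped connection (it can't be reused by the pool), but each URL needs only one request instead of two.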

Upvotes: 1

Related Questions