user783836

Reputation: 3519

Unable to paginate through all Bing API results

I'm currently using the Bing Web Search API v7 to query Bing for search results. As per the API docs, the parameters count and offset are used to paginate through the results, the total number of which is reported in the results themselves as the value of totalEstimatedMatches.

From the documentation:

totalEstimatedMatches: The estimated number of webpages that are relevant to the query. Use this number along with the count and offset query parameters to page the results.

This seems to work up to a point, after which the API just continues to return the exact same results over and over, regardless of the values of count and offset.

In my specific case, totalEstimatedMatches was 330,000. With a count of 50 (i.e. 50 results per request), the results begin repeating at around offset 700, i.e. about 3,500 results into the estimated 330,000.
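For reference, the kind of request loop I'm describing looks roughly like this (a minimal sketch rather than my exact code, using the requests library with placeholders for the key):

import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"}  # placeholder

def get_page(q, offset, count=50):
    resp = requests.get(
        BING_ENDPOINT,
        headers=HEADERS,
        params={"q": q, "count": count, "offset": offset},
    )
    resp.raise_for_status()
    return resp.json()

first = get_page("example query", offset=0)
total = first["webPages"]["totalEstimatedMatches"]  # ~330,000 in my case

# Step the offset forward 50 results at a time. Well before exhausting
# `total`, the webPages.value entries start repeating verbatim.
for offset in range(0, total, 50):
    urls = [p["url"] for p in get_page("example query", offset)["webPages"]["value"]]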

Playing with the Bing front end, I have noticed similar behaviour once the page count gets sufficiently high.

Am I using the API incorrectly, or is this some sort of limitation or bug where totalEstimatedMatches is just way off?

Upvotes: 3

Views: 1196

Answers (2)

Rob Truxal

Reputation: 6408

Technically this isn't a direct answer to the question as asked. Hopefully it's still helpful to offer a way to paginate efficiently through Bing's API without relying on the "totalEstimatedMatches" return value which, as the other answer explains, can behave very unpredictably. Here's some Python:

class ApiWorker(object):
    def __init__(self, q):
        self.q = q
        self.offset = 0
        self.result_hashes = set()
        self.finished = False

    def calc_next_offset(self, resp_urls):
        before_adding = len(self.result_hashes)
        # A set silently drops duplicates, so its size only grows by the
        # number of genuinely new URLs in this response.
        self.result_hashes.update(hash(i) for i in resp_urls)
        after_adding = len(self.result_hashes)
        if after_adding == before_adding:
            # Either the whole response was duplicates or it was empty;
            # no new results are coming, so stop paginating.
            self.finished = True
        else:
            self.offset += len(resp_urls)

    def page_through_results(self, *args, **kwargs):
        while not self.finished:
            new_resp_urls = ...<call_logic>...
            self.calc_next_offset(new_resp_urls)
            ...<save logic>...
        print(f'All unique results for q={self.q} have been obtained.')

This^ will stop paginating as soon as a full response of duplicates has been received.
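For completeness, here's one hypothetical way the <call_logic> placeholder could be filled in. The endpoint, header, and fetch_urls helper below are assumptions for illustration, not part of the original answer:

import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"  # assumed v7 endpoint
API_KEY = "YOUR_SUBSCRIPTION_KEY"                             # assumption: your key

def fetch_urls(q, offset, count=50):
    # One page of web results, returned as a plain list of URLs.
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"q": q, "offset": offset, "count": count},
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    return [p["url"] for p in pages]

worker = ApiWorker("example query")
while not worker.finished:
    urls = fetch_urls(worker.q, worker.offset)
    worker.calc_next_offset(urls)
    # ...<save logic>...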

Upvotes: 0

Ronak

Reputation: 736

totalEstimatedMatches provides the total number of matches for that query across the web, which includes duplicate results and near-duplicate content as well.

In order to optimize indexing, all search engines restrict results to the top N webpages. This is what you are seeing. The behavior is consistent across search engines, because nearly all users refine the query, select a webpage, or abandon the search within the first 2-3 pages of results.

In short, this is not a bug or an incorrect implementation; it's an index optimization that restricts you from getting more results. If you really need more results, you can use the related searches and append the unique webpages they return, as sketched below.
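As a rough illustration of that suggestion, here's a minimal sketch that pulls the relatedSearches answer out of a v7 response and re-queries with each related query, keeping only unique webpages by URL. The fetch_json helper, endpoint, and key below are assumptions for illustration:

import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"  # assumed endpoint
API_KEY = "YOUR_SUBSCRIPTION_KEY"                             # assumption

def fetch_json(q):
    # Request webpages plus the relatedSearches answer in one response.
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"q": q, "responseFilter": "Webpages,RelatedSearches"},
    )
    resp.raise_for_status()
    return resp.json()

def expand_via_related(q):
    seen, results = set(), []
    first = fetch_json(q)
    related = [r["text"] for r in first.get("relatedSearches", {}).get("value", [])]
    for query in [q] + related:
        data = first if query == q else fetch_json(query)
        for page in data.get("webPages", {}).get("value", []):
            if page["url"] not in seen:  # append only unique webpages
                seen.add(page["url"])
                results.append(page)
    return results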

Upvotes: 3
