user2771609

Reputation: 1903

How does paging work in the list_blobs function in Google Cloud Storage Python Client Library

I want to get a list of all the blobs in a Google Cloud Storage bucket using the Client Library for Python.

According to the documentation, I should use the list_blobs() function. The function appears to use two arguments, max_results and page_token, to achieve paging, but I am not sure how to use them.

In particular, where do I get the page_token from?

I would have expected that list_blobs() would provide a page_token for use in subsequent calls, but I cannot find any documentation on it.

In addition, max_results is optional. What happens if I don't provide it? Is there a default limit? If so, what is it?

Upvotes: 15

Views: 8397

Answers (5)

Akhilesh Siddhanti

Reputation: 100

I wanted to extract only the subfolders out of a GCP bucket. Some of the other methods work, but only if all the results fit on a single page. I found that something like the following works for me:

blobs = self.gcp_client.list_blobs(bucket_name, prefix='subfolder/', delimiter='/')

for page in blobs.pages:
    print("Subfolders on Page: ", page.prefixes)

Sharing this answer in case it helps anyone.

Upvotes: 1

Victor Klapholz

Reputation: 61

Please read the inline comments:

from google.cloud import storage

storage_client = storage.Client()  # renamed so it doesn't shadow the storage module

bucket_name = ''  # Fill in your bucket name here

# This limits the number of results - pass None instead to get all the blobs in the bucket
max_results = 23_344

# Including "nextPageToken" among the fields triggers implicit pagination
# (which is managed for you by the library).
# You also need to list under "items" all the fields you would like to fetch.
# Here are the supported fields: https://cloud.google.com/storage/docs/json_api/v1/objects#resource

fields = 'items(name),nextPageToken'

counter = 0
for blob in storage_client.list_blobs(bucket_name, fields=fields, max_results=max_results):
    counter += 1
    print(counter, ')', blob.name)

Upvotes: 2

Luke

Reputation: 707

I'm just going to leave this here. I'm not sure if the libraries have changed in the two years since this answer was posted, but if you're using prefix, then for blob in bucket.list_blobs() doesn't work correctly. It seems that getting blobs and getting prefixes are fundamentally different, and using pages with prefixes is confusing.

I found a post in a github issue (here). This works for me.

def list_gcs_directories(bucket, prefix):
    # from https://github.com/GoogleCloudPlatform/google-cloud-python/issues/920
    iterator = bucket.list_blobs(prefix=prefix, delimiter='/')
    prefixes = set()
    for page in iterator.pages:
        print(page, page.prefixes)
        prefixes.update(page.prefixes)
    return prefixes

A different comment on the same issue suggested this:

def get_prefixes(bucket):
    iterator = bucket.list_blobs(delimiter="/")
    response = iterator._get_next_page_response()
    return response['prefixes']

Which only gives you the prefixes if all of your results fit on a single page.

Upvotes: 1

user2771609

Reputation: 1903

list_blobs() does use paging, but you do not use page_token to achieve it.

How It Works:

The way list_blobs() works is that it returns an iterator that iterates through all the results, doing paging behind the scenes. So simply doing the following will get you through all the results, fetching pages as needed:

for blob in bucket.list_blobs():
    print(blob.name)

The Documentation is Wrong/Misleading:

As of 04/26/2017, this is what the docs say:

page_token (str) – (Optional) Opaque marker for the next “page” of blobs. If not passed, will return the first page of blobs.

This implies that the result will be a single page of results, with page_token determining which page. This is not correct. The result iterator iterates through multiple pages. What page_token actually represents is which page the iterator should START at. If no page_token is provided, it will start at the first page.

Helpful To Know:

max_results limits the total number of results returned by the iterator.
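To make these two semantics concrete, here is a toy, pure-Python sketch (this is NOT the real client library; a real page_token is an opaque server-issued string, modeled here as a simple page index) showing that page_token selects the page the iterator starts at, while max_results caps the total number of results across all pages:

```python
def list_blobs_mock(names, page_size=2, page_token=None, max_results=None):
    """Toy model of the iterator's paging semantics (hypothetical helper)."""
    # Split the full listing into pages, like the server does.
    pages = [names[i:i + page_size] for i in range(0, len(names), page_size)]
    # page_token picks the page to START at; iteration then continues
    # through every remaining page, not just that one page.
    start = int(page_token) if page_token is not None else 0
    emitted = 0
    for page in pages[start:]:
        for name in page:
            # max_results is a cap on the TOTAL yielded, not per page.
            if max_results is not None and emitted >= max_results:
                return
            yield name
            emitted += 1

names = ["a", "b", "c", "d", "e"]
# Starting at the second page still iterates through all later pages:
print(list(list_blobs_mock(names, page_token="1")))  # ['c', 'd', 'e']
# max_results caps the grand total across pages:
print(list(list_blobs_mock(names, max_results=3)))   # ['a', 'b', 'c']
```
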

The iterator does expose pages if you need it:

for page in bucket.list_blobs().pages:
    for blob in page:
        print(blob.name)

Upvotes: 20

bw4sz

Reputation: 2257

It was a bit confusing, but I found the answer here

https://googlecloudplatform.github.io/google-cloud-python/latest/iterators.html

You can iterate through the pages and fetch the items as needed:

iterator = self.bucket.list_blobs()

self.get_files = []
for page in iterator.pages:
    print('    Page number: %d' % (iterator.page_number,))
    print('  Items in page: %d' % (page.num_items,))
    print('     First item: %r' % (next(page),))
    print('Items remaining: %d' % (page.remaining,))
    print('Next page token: %s' % (iterator.next_page_token,))
    for f in page:
        self.get_files.append("gs://" + f.bucket.name + "/" + f.name)

print("Found %d results" % (len(self.get_files)))

Upvotes: 0
