Reputation: 481
How do I set access permissions for an entire folder in a storage bucket? For example, I have 2 folders (containing many subfolders/objects) in a single bucket (call them folder 'A' and folder 'B') and 4 members in the project team. All 4 members should have read/edit access to folder 'A', but only 2 of them are allowed access to folder 'B'. Is there a simple way to set these permissions for each folder? There are hundreds or thousands of files within each folder, and it would be very time consuming to set permissions on each individual file. Thanks for any help.
Upvotes: 20
Views: 31932
Reputation: 657
You can do this now with managed folders. As a test, I uploaded the following layout to a bucket:

fine-grain-test-biswalc/
├── test1/
│   ├── __init__.py
│   └── utils.py
└── test2/
    ├── __init__.py
    └── globals.py

The goal: a service account should be able to download everything under the test1 directory in the bucket, but when it tries the test2 directory, it should get an error.

1. Create a service account: IAM & Admin > Service Accounts > CREATE SERVICE ACCOUNT.

2. Create a key for it: open the Service Account > Keys tab > ADD KEY > CREATE NEW KEY > JSON, and download the key file.

3. Authenticate as the service account:

gcloud auth activate-service-account --key-file=my-key.json
4. Create a custom role that only allows listing: IAM & Admin > Roles > CREATE ROLE. I named it Storage.Objects.List, set the role launch stage to General Availability, and added the single permission storage.objects.list.

5. Grant that role to the service account: IAM & Admin > IAM > GRANT ACCESS, principal my-sa@project-id.iam.gserviceaccount.com, role Storage.Objects.List. (Listing is granted at the bucket level, so the service account needs it for the whole bucket; it cannot be scoped per folder.)
6. Create the bucket: Cloud Storage > CREATE, named fine-grain-test-biswalc. Under "Choose how to control access to objects" > Access control, pick Uniform (managed folders require uniform bucket-level access). Then upload the test1 and test2 directories.

7. Attach a managed folder to test1: in the bucket's Folder browser, click on the three dots for test1 > click Edit access > ATTACH MANAGED FOLDER.

8. In the "Permissions for test1/" pane, click ADD PRINCIPAL, enter my-sa@project-id.iam.gserviceaccount.com, and under Assign roles pick Storage Admin.
9. Test it while authenticated as the service account:

gsutil -m cp -r "gs://fine-grain-test-biswalc/test1" .
gsutil -m cp -r "gs://fine-grain-test-biswalc/test2" .

The first copy succeeds; the second fails with a 403.
Error snippet:
Copying gs://fine-grain-test-biswalc/test2/__init__.py...
Copying gs://fine-grain-test-biswalc/test2/globals.py...
AccessDeniedException: 403 HttpError accessing <https://storage.googleapis.com/download/storage/v1/b/fine-grain-test-biswalc/o/test2%2Fglobals.py?generation=XXXXXXX&alt=media>: response: <{'content-type': 'text/html; charset=UTF-8', 'date': 'Fri, 26 Apr 2024 22:14:56 GMT', 'vary': 'Origin, X-Origin', 'x-guploader-uploadid': 'XXXXXXX-XXXXXX', 'expires': 'Fri, XXXXXX GMT', 'cache-control': 'private, max-age=0', 'content-length': 'XXX', 'server': 'UploadServer', 'alt-svc': 'h3=":443"; ma=XXX,h3-29=":XXX"; ma=XXX', 'status': '403'}>, content <my-sa@project-id.iam.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object. Permission 'storage.objects.get' denied on resource (or it may not exist).>
CommandException: 1 file/object could not be transferred.
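For scripted setups, here is a minimal gcloud sketch of the managed-folder part (steps 6-8). It assumes a reasonably recent Google Cloud SDK (the managed-folders commands are newer) and reuses the bucket and service-account names from above; I used roles/storage.objectViewer since the test only needs read access, but Storage Admin as in step 8 works too.

# Create a managed folder over the test1/ prefix
# (the bucket must already use uniform bucket-level access):
gcloud storage managed-folders create gs://fine-grain-test-biswalc/test1/

# Grant the service account read access to objects under test1/ only:
gcloud storage managed-folders add-iam-policy-binding gs://fine-grain-test-biswalc/test1/ \
  --member=serviceAccount:my-sa@project-id.iam.gserviceaccount.com \
  --role=roles/storage.objectViewer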
Upvotes: 0
Reputation: 409
As of 2024, Google Cloud Storage managed folders are now in preview.
With managed folders, you can organize your objects into groups and set IAM policies for more granular access control over data segments within a bucket.
https://cloud.google.com/storage/docs/managed-folders
Upvotes: 3
Reputation: 1
I tried all the suggestions here, including granting access with CEL (IAM Condition expressions). Then I came across the reason nobody fully resolves this issue: GCP does not treat folders as actually existing.
From https://cloud.google.com/storage/docs/folders:
Cloud Storage operates with a flat namespace, which means that folders don't actually exist within Cloud Storage. If you create an object named folder1/file.txt in the bucket your-bucket, the path to the object is your-bucket/folder1/file.txt, but there is no folder named folder1; instead, the string folder1 is part of the object's name.
Folders are just a visual representation that gives a hierarchical feel to the bucket and the objects within it.
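A quick way to see this from the command line (hypothetical bucket and file names):

# No folder is created here; the object's full name is simply "folder1/file.txt":
echo "hello" > file.txt
gsutil cp file.txt gs://your-bucket/folder1/file.txt

# The "folder1/" shown when listing is derived from object name prefixes;
# it is a display convenience, not a real container:
gsutil ls gs://your-bucket/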
Upvotes: -1
Reputation: 182
It looks like this has become possible through IAM Conditions.
You need to set an IAM Condition like:
resource.name.startsWith('projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_PREFIX]')
This condition can't be used with the permission storage.objects.list, though. Instead, add two roles to a group/user: the first grants list access to the whole bucket, and the second carries the condition above and allows read/write access to all objects under your "folder" prefix. That way the group/user can list all objects in the bucket, but can only read/download/write the allowed ones.
There are some limitations here, such as no longer being able to use the gsutil acl ch commands referenced in other answers.
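As a sketch, the two grants could look like this with gcloud (project, bucket, and group names are hypothetical; the custom role carries only storage.objects.list so the unconditional grant does not also expose object contents):

# 1) Custom role with only the list permission, granted without a condition,
#    because list access cannot be restricted by object name:
gcloud iam roles create bucketLister --project=my-project \
  --permissions=storage.objects.list --stage=GA
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member=group:team@example.com \
  --role=projects/my-project/roles/bucketLister

# 2) Read/write on objects, restricted to the folderA/ prefix via the condition:
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member=group:team@example.com \
  --role=roles/storage.objectAdmin \
  --condition='expression=resource.name.startsWith("projects/_/buckets/my-bucket/objects/folderA/"),title=folderA-only'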
Upvotes: 15
Reputation: 19
If that doesn't work, add yourself as GCS admin with the legacy reader/writer permissions (which should be irrelevant), but it worked for me.
Upvotes: 1
Reputation: 181
Leaving this here so someone else doesn't waste an afternoon beating their head against this wall. It turns out that 'list' permissions are handled at the bucket level in GCS, and you can't restrict them using a Condition based on an object name prefix. If you do, you won't be able to access any resources in the bucket. Instead, you have to set up the member with an unrestricted 'Storage Object Viewer' role and use Conditions with a specified object prefix on 'Storage Object Admin' or 'Storage Object Creator' to restrict (over)write access. Not ideal if you are trying to keep the contents of your bucket private.
https://cloud.google.com/storage/docs/access-control/iam
"Since the storage.objects.list permission is granted at the bucket level, you cannot use the resource.name condition attribute to restrict object listing access to a subset of objects in the bucket. Users without storage.objects.list permission at the bucket level can experience degraded functionality for the Console and gsutil."
Upvotes: 18
Reputation: 4743
It's very poorly documented, but search for "folder" in the gsutil acl ch manpage:
Grant the user with the specified canonical ID READ access to all objects in example-bucket that begin with folder/:
gsutil acl ch -r \
  -u 84fac329bceSAMPLE777d5d22b8SAMPLE785ac2SAMPLE2dfcf7c4adf34da46:R \
  gs://example-bucket/folder/
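The same command also accepts a plain account email instead of a canonical ID (hypothetical address below). Note that acl ch applies ACLs object by object, so it only works on buckets using fine-grained (ACL) access, not uniform bucket-level access, and objects added under the prefix later won't inherit the grant:

gsutil acl ch -r -u jane@example.com:R gs://example-bucket/folder/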
Upvotes: 12
Reputation: 38369
You cannot do this in GCS. GCS provides permissions to buckets and permissions to objects. A "folder" is not a GCS concept and does not have any properties or permissions.
Upvotes: 2