MobliMic

Reputation: 313

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting access denied. I can see the files in the GCS console, but I cannot download them there, and I cannot access them through gsutil either, using the command below.

gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/

But all this returns is AccessDeniedException: 403 Forbidden

I can list all the files, but I cannot actually access them. I've tried adding my user to the ACL, but that had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I suddenly lost all access.
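For anyone debugging the same symptom, the relevant state can be inspected with standard gsutil/gcloud commands. This is a diagnostic sketch; the bucket and object paths are the placeholders from the question above:

```shell
# Which account is gsutil/gcloud actually authenticated as?
gcloud auth list

# What ACL does the failing object carry?
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf

# What IAM bindings exist on the bucket itself?
gsutil iam get gs://my-bucket
```

Comparing the active account against the object ACL and bucket IAM policy usually shows whether the problem is the identity, the object ACLs, or the bucket-level policy.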

I've checked these posts, but none seem to have a solution that's helped me:

Can't access resource as OWNER despite the fact I'm the owner

gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE

gsutil cors set command returns 403 AccessDeniedException

Upvotes: 16

Views: 16813

Answers (3)

mxxk

Reputation: 10264

tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.


NOTE: This explanation applies to GCS buckets using uniform bucket-level access.

In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.

This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.

But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
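Whether a bucket is in this mode can be confirmed from the CLI. A minimal check, with the bucket name as a placeholder:

```shell
# Prints "Enabled: True" (with an enable time) when uniform
# bucket-level access is on, "Enabled: False" otherwise
gsutil uniformbucketlevelaccess get gs://my-bucket
```

When it reports Enabled: True, object ACLs are ignored and only IAM roles decide access, which is exactly the situation described here.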

The Storage Admin role has the following permissions which are not present in the Owner role: [1]

  • storage.buckets.get
  • storage.buckets.getIamPolicy
  • storage.buckets.setIamPolicy
  • storage.buckets.update
  • storage.multipartUploads.abort
  • storage.multipartUploads.create
  • storage.multipartUploads.list
  • storage.multipartUploads.listParts
  • storage.objects.create
  • storage.objects.delete
  • storage.objects.get
  • storage.objects.getIamPolicy
  • storage.objects.list
  • storage.objects.setIamPolicy
  • storage.objects.update

This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
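The grant itself can be done from the CLI as well. A sketch, with the project ID, member email, and bucket name as placeholders:

```shell
# Grant Storage Admin at the project level
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:me@example.com" \
  --role="roles/storage.admin"

# Or, for a narrower blast radius, grant it on a single bucket only
gsutil iam ch user:me@example.com:roles/storage.admin gs://my-bucket
```

The bucket-scoped variant is preferable when the user only needs access to that one bucket.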

Footnotes

  1. Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:

    • Go to "IAM & Admin"
    • Go to "Roles"
    • Filter for "Owner"
    • Go to "Owner"
    • (See list of permissions)
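The same information is available from the CLI, which is easier to diff than clicking through the console:

```shell
# Show a role's metadata, including its includedPermissions list
gcloud iam roles describe roles/owner
gcloud iam roles describe roles/storage.admin
```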

Upvotes: 0

Aladejubelo Oluwashina

Reputation: 428

Although this is quite an old question, I had a similar issue recently. After trying many of the options suggested here without success, I carefully re-examined my script and discovered I was getting the error because of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!

Upvotes: 7

Brandon Yarbrough

Reputation: 38389

This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).

I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."

The command to do that would be:

gsutil acl set -R project-private gs://myBucketName/
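One way to test the "objects were written as private by another account" hypothesis before changing anything, using the paths from the question as placeholders:

```shell
# Run from a GCE VM, where the default credentials are the
# service account that the FUSE mount likely used for uploads
gsutil acl get gs://my-bucket/folder-a/folder-b/mypdf.pdf

# If the ACL shows only that service account, the fix above applies;
# after relaxing the ACLs, the original copy should succeed
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
```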

Upvotes: 6
