Matt Warner

Reputation: 23

Google Container Registry Per Image ACLs

Our platform's structure requires a large number of private images within a single project, or at most a few projects if possible. Additionally, we are largely a GCP shop and would love to stay within the Google environment.

Currently, as I understand it, GCR access control requires the storage.objects.get and storage.objects.list permissions (or the objectViewer role) to be granted to a service account (in this case) in order to read from GCR. This generally isn't an issue, and we haven't had any trouble using gsutil to enable read access at the project level for the container registry. Below is a workflow example of what we're doing to achieve general access. However, it does not achieve our goal of restricting each service account to specific images.


A simple Docker image is built, tagged, and pushed into GCR, using exproj in place of the actual project name.

sudo docker build -t hello_example:latest .
sudo docker tag hello_example:latest gcr.io/exproj/hello_example:latest
sudo docker push gcr.io/exproj/hello_example:latest

This gives us the hello_example repository in the exproj project. We create a service account and give it permission to read from the bucket.
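For completeness, the service account and the gcr-read-2.json key used below were created roughly along these lines (a sketch; [email protected] stands in for the account's full email, as it does elsewhere in this post):

gcloud iam service-accounts create gcr-read-2 --display-name "GCR read-only"
gcloud iam service-accounts keys create gcr-read-2.json --iam-account [email protected]

With the account in place, the read grant itself is: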

gsutil acl ch -u [email protected]:R gs://artifacts.exproj.appspot.com/
Updated ACL on gs://artifacts.exproj.appspot.com
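For reference, the same project-level read could presumably also be expressed as an IAM binding on the registry's backing bucket rather than a legacy object ACL (just a sketch, using the same placeholder names):

gsutil iam ch serviceAccount:[email protected]:objectViewer gs://artifacts.exproj.appspot.com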

With read access in place, we can log in to Docker using the service account's JSON key.

sudo docker login -u _json_key --password-stdin https://gcr.io < gcr-read-2.json
Login Succeeded

We can then pull the image from the registry as expected:

sudo docker run gcr.io/exproj/hello_example

However, for our purposes we do not want the service account to have access to the entire registry for the project, but rather only to hello_example as identified above. In my testing with gsutil I'm unable to define per-image ACLs, but I'm wondering if I'm just missing something.

gsutil acl ch -u [email protected]:R gs://artifacts.exproj.appspot.com/hello_example/
CommandException: No URLs matched: gs://artifacts.exproj.appspot.com/hello_example/
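I suspect this is because the image content isn't stored under per-repository paths in the bucket at all; a quick listing (sketched here, output truncated) only turns up digest-keyed blobs, nothing named hello_example:

gsutil ls gs://artifacts.exproj.appspot.com/containers/images/ | head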

In the grand scheme of things, the model we would like to reach is one where each Account's service account can pull only that Account's images, rather than the whole registry.

While we could create a separate project (and therefore a separate registry) per Account, the scaling burden of tracking projects across each Account, and being at the whim of GCP project limits during a heavy-use period, is worrying.

I'm open to any ideas or structures that would achieve this other than the above as well!


EDIT

Thanks to jonjohnson for responding! I wrote a quick-and-dirty script along the recommended lines for blob-level reads. I'm still validating its success, but I did want to note that we control when pushes occur, so tracking the results is less fragile than it could be in other situations.

Here's a script I put together as an example of manifest -> digest permission modifications.

require 'json'

# POC GCR Blob Handler
# ---
# Hardcoded params and system calls for speed
# Content pushed into gcr.io will be at gs://artifacts.{projectid}.appspot.com/containers/images/ per digest

def main()
    puts "Running blob gathering from manifest for org_b and example_b"
    # Request the schema 2 manifest explicitly so the response has "config" and "layers" keys
    manifest = `curl -u _token:$(gcloud auth print-access-token) --fail --silent --show-error -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://gcr.io/v2/exproj/org_b/manifests/example_b`
    manifest = JSON.parse(manifest)
    # Manifest is parsed, gather digests to ensure we allow permissions to correct blobs
    puts "Gathering digests to allow permissions"
    digests = Array.new
    digests.push(manifest["config"]["digest"])
    manifest["layers"].each {|l| digests.push(l["digest"])}
    # Digests are now gathered for the config and layers, loop through the digests and allow permissions to the account
    puts "Digests are gathered, allowing read permissions to no-perms account"
    digests.each do |d|
        puts "Allowing permissions for #{d}"
        res = `gsutil acl ch -u [email protected]:R gs://artifacts.exproj.appspot.com/containers/images/#{d}`
        puts res
    end
    puts "Permissions changed for org_b:example_b for [email protected]"
end

main()

While this does set the permissions appropriately, I'm seeing a fair amount of fragility in the actual Docker authentication and pull, with Docker logins sometimes not being recognized.
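For reference, the validation flow I'm attempting looks roughly like this (a sketch; no-perms-key.json stands in for the key file of the no-perms account above):

sudo docker login -u _json_key --password-stdin https://gcr.io < no-perms-key.json
sudo docker pull gcr.io/exproj/org_b:example_b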

Was this along the lines of what you were referring to, jonjohnson? Essentially allowing access per blob, per service account, based on the manifest/layers associated with that image/tag?

Thanks!

Upvotes: 2

Views: 951

Answers (1)

jonjohnson

Reputation: 346

There's not currently an easy way to do what you want.

One thing you can do is grant access to individual blobs in your bucket for each image. This isn't super elegant because you'd have to update the permissions after every push.
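Concretely, that comes down to per-object grants along these lines (a sketch using the placeholder names from the question; sha256:<digest> is each digest the image's manifest references):

gsutil acl ch -u [email protected]:R gs://artifacts.exproj.appspot.com/containers/images/sha256:<digest>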

You could automate that yourself by using the pubsub support in GCR to listen for pushes, look at the blobs referenced by that image, match the repository path to whichever service accounts need access, then grant those service accounts access to each blob object.
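Sketching that automation with the gcloud CLI (the subscription name is arbitrary; gcr is the Pub/Sub topic Container Registry publishes push notifications to, and it may need to be created first):

gcloud pubsub subscriptions create gcr-acl-sync --topic=gcr
gcloud pubsub subscriptions pull gcr-acl-sync --auto-ack --limit=10 --format=json

Each notification carries the action (e.g. INSERT), the image digest, and the tag, which is enough to drive the per-blob grants described above.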

One downside is that each service account will still be able to look at the image manifest (essentially a list of layer digests plus some image runtime config). It won't be able to pull the actual image contents, though.

Also, this relies a bit on some implementation details of GCR, so it might break in the future.

Upvotes: 1
