Reputation: 4249
I’ve been looking around trying to figure out whether it would be possible to host a static React app in a Google Cloud Storage bucket and use Google Cloud CDN with a single Google Cloud Load Balancer to route cache misses to the bucket, manage certificates, and route requests from the React app to an API hosted in GKE.
Would it be possible to achieve this architecture, or would there be another recommended approach?
Upvotes: 9
Views: 2737
Reputation: 24270
You can have a load balancer with two (or more) route matchers: one for api.example.com with a backend pointing to GKE, and another for static.example.com with a backend bucket.
This backend bucket would have CDN enabled. You can point multiple routes to the same backend if needed.
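As a sketch, host-based routes like those can be attached to an existing URL map with gcloud. All names here (`web-map`, `api-backend-service`, `static-backend-bucket`) are placeholders for resources created elsewhere:

```shell
# Route api.example.com to the GKE backend service.
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=api-paths \
    --default-service=api-backend-service \
    --new-hosts=api.example.com

# Route static.example.com to the CDN-enabled backend bucket.
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=static-paths \
    --default-backend-bucket=static-backend-bucket \
    --new-hosts=static.example.com
```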
Specifically:

1. Create a Kubernetes Service that is represented by a stand-alone Network Endpoint Group. This will allow you to manage the load balancer outside of GKE. Docs: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg
2. Create an HTTP(S) Load Balancer with the route(s) you want to match your API endpoint. Create a BackendService during the load balancer creation flow and point it to the zonal Network Endpoint Group you created in step 1. Docs: https://cloud.google.com/load-balancing/docs/https/https-load-balancer-example
3. Create a BackendBucket in the same flow, pointing it at the bucket you want to use for storing your static React assets. Make sure to tick the “Enable Cloud CDN” box, and create a route that sends traffic to that bucket. Docs: https://cloud.google.com/cdn/docs/using-cdn#enable_existing
4. Finish creating the load balancer, which will assign IP addresses, and update DNS for your domain names to point at those IPs.
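The steps above can be sketched end to end with gcloud. Every name below is a placeholder, and the NEG name and zone must match what GKE actually creates for the annotated Service:

```shell
# 1. Standalone NEG: annotate the Kubernetes Service so GKE creates a
#    zonal Network Endpoint Group and keeps it in sync with the Pods.
kubectl annotate service api-svc \
    cloud.google.com/neg='{"exposed_ports": {"80": {"name": "api-neg"}}}'

# 2. Backend service pointing at that NEG (zone is an example).
gcloud compute health-checks create http api-hc --port=80
gcloud compute backend-services create api-backend-service \
    --global --health-checks=api-hc \
    --load-balancing-scheme=EXTERNAL
gcloud compute backend-services add-backend api-backend-service \
    --global \
    --network-endpoint-group=api-neg \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE --max-rate-per-endpoint=100

# 3. CDN-enabled backend bucket for the static React assets.
gcloud compute backend-buckets create static-backend-bucket \
    --gcs-bucket-name=my-react-assets --enable-cdn

# 4. URL map, managed certificate, proxy, and forwarding rule.
gcloud compute url-maps create web-map \
    --default-backend-bucket=static-backend-bucket
gcloud compute ssl-certificates create web-cert \
    --domains=api.example.com,static.example.com
gcloud compute target-https-proxies create web-proxy \
    --url-map=web-map --ssl-certificates=web-cert
gcloud compute forwarding-rules create web-fr \
    --global --target-https-proxy=web-proxy --ports=443
# Then point DNS A records for both hostnames at the assigned IP.
```

Host rules sending api.example.com to the backend service and static.example.com to the backend bucket would still need to be added to the URL map; the console flow linked above walks through that part.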
Upvotes: 8
Reputation: 2322
The first thing to take into consideration with this approach is that the CDN sits in front of the load balancer, not the other way around. This means there is no routing involved at the CDN; routing happens after the content is requested by the CDN cache.
Apart from that, the CDN starts caching content only after the first cache miss: it fetches a resource for the first time only once a client has requested it.
If the resource is not already cached in the CDN, then it will be routed to the backend (through the load balancer) in order to retrieve it and to make a "local copy". Of course, this requires that the resource also exists in the backend in order for the CDN to cache it.
Your approach seems to assume that the CDN will act as a different kind of persistence layer, so I believe something similar is still possible, but using a Cloud Storage bucket rather than Cloud CDN.
Since buckets have multi-regional storage classes, you might be able to achieve something very similar to what you're trying to do with the CDN.
Update:
Considering the new premise: Using the same load balancer to route requests between the static site hosted in a GCS bucket and the API deployed in GKE, with the CDN in front of it and with support for certificates.
Although the HTTP(S) Load Balancer can manage certificates, is compatible with Cloud CDN, can have buckets or GCE instances as backends, and is the default Ingress option in GKE (so it's also compatible with it), this approach doesn't seem feasible.
When you expose an application on GKE using the default ingress class (GCE) that deploys this kind of load balancer, the GKE cloud controller manager is in charge of that resource and relies on the definitions deployed to GKE.
If you try to manually manage the load balancer to add a new backend, in your case, the bucket containing your static application, the changes might be reversed if a new version of the Ingress is deployed to the cluster.
In the opposite case, where you manually create the load balancer and configure its backend to serve your bucket's content, there's no supported way to attach this load balancer to the GKE cluster; it has to be created from within Kubernetes.
So, in a nutshell: Either you use the load balancer with the bucket or with the GKE cluster, not both due to the aforementioned design.
This is, of course, completely possible if you deploy two different load balancers (Ingresses, in GKE terms) and put your CDN in front of the load balancer with the bucket. I mention this to contrast it with the information above.
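That two-load-balancer alternative can be sketched as follows (all names hypothetical): GKE manages one load balancer through an ordinary Ingress, and a second, bucket-only load balancer with CDN is created by hand:

```shell
# LB #1: managed by GKE via the default (gce) Ingress class.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  defaultBackend:
    service:
      name: api-svc
      port:
        number: 80
EOF

# LB #2: created manually, serving only the CDN-enabled bucket.
gcloud compute backend-buckets create static-backend-bucket \
    --gcs-bucket-name=my-react-assets --enable-cdn
gcloud compute url-maps create static-map \
    --default-backend-bucket=static-backend-bucket
gcloud compute ssl-certificates create static-cert \
    --domains=static.example.com
gcloud compute target-https-proxies create static-proxy \
    --url-map=static-map --ssl-certificates=static-cert
gcloud compute forwarding-rules create static-fr \
    --global --target-https-proxy=static-proxy --ports=443
```

Each load balancer then gets its own IP, and DNS for the API and static hostnames points at the respective addresses.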
Let me know if this helps :)
Upvotes: 2