nonopolarity

Reputation: 151264

To serve static files, is it good practice to package this functionality into a Docker image and run it as part of Kubernetes?

Short question: if the purpose is to serve static files using Nginx, does it make sense to run it in Docker / Kubernetes, vs. simply having Nginx machine(s) serve the files?

Details:

For example, to serve React frontend code using Nginx, one way is simply to have a machine that runs Nginx and serves the files, and be done with it.

Another approach is to package this functionality into a Docker image and make it part of Kubernetes. Then, on one machine, there might be several Pods running this image. Is this good Kubernetes practice?
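
For concreteness, a minimal sketch of such an image, assuming a Create React App-style build whose output lands in build/ (the base image tags and paths here are illustrative, not from my actual setup):

    # Stage 1: build the React bundle. Versions and paths are assumptions.
    FROM node:18 AS build
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci
    COPY . .
    RUN npm run build

    # Stage 2: copy only the static output into a stock nginx image.
    # /usr/share/nginx/html is the default document root of the
    # official nginx image.
    FROM nginx:1.25
    COPY --from=build /app/build /usr/share/nginx/html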

Currently, the data is served by Ruby on Rails servers, but in the future that functionality may be packaged into a Docker image as well. So the current plan is to detach the frontend React code from the Rails server and serve it with Nginx inside Kubernetes.

Is it true that when the backend data is also in Kubernetes, a physical computer can be better utilized? For example, if Nginx isn't working the processor hard, a processor-heavy Pod can be scheduled onto the same machine, achieving better utilization. That is the only reason I can think of to move the Nginx static file serving into Kubernetes instead of just running an Nginx machine. The other reason might be that if Nginx becomes slow or crashes, Kubernetes can stop the whole Pod and re-spawn a fresh one.
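
(For reference, the Pod spec fields behind both of those behaviors would look roughly like the fragment below; the numbers and image name are placeholders, not anything from my actual setup.)

    # Fragment of a Deployment's Pod template. The scheduler packs Pods
    # onto nodes using the CPU/memory requests, and the kubelet restarts
    # the container when the liveness probe fails. Values are examples.
    containers:
      - name: nginx
        image: registry.example.com/frontend:latest  # placeholder
        resources:
          requests:
            cpu: 100m      # static file serving needs little CPU
            memory: 64Mi
        livenessProbe:
          httpGet:
            path: /
            port: 80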

Upvotes: 0

Views: 1234

Answers (1)

David Maze

Reputation: 160073

Does it make sense? Yes. Is it obligatory? No.

There are a couple of good reasons to want to use Docker for a static file server. Consistency of deployment is a big one: if you consistently deploy things using `docker build; docker push; kubectl apply`, then it's easy to know how to update the front-end app too. As you note, an nginx server isn't especially resource-heavy, so running it in a container lets it share hardware with other workloads, and you can take advantage of Kubernetes niceties like Pod replicas for HA and zero-downtime upgrades.

    +--------------------------------------+
    |k8s   /--> nginx --\       /--> Rails |
--->|------+--> nginx --+-------+--> Rails |
    | (LB) \--> nginx --/ (CIP) \--> Rails |
    +--------------------------------------+
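
A sketch of the in-cluster half of that diagram, with the names, replica count, and image all as placeholders, might be:

    # Hypothetical manifests matching the diagram: three nginx replicas
    # behind a LoadBalancer Service. All names are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: frontend
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: frontend
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
            - name: nginx
              image: registry.example.com/frontend:latest
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      type: LoadBalancer
      selector:
        app: frontend
      ports:
        - port: 80
          targetPort: 80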

If you already have an nginx proxy, it could make sense to serve the files from there. Architecturally you will probably need some proxy to reach your cluster from outside anyway, and you can have it serve the static files as well. If you already have a non-container deployment system and are trying to incrementally migrate to Kubernetes, this is a part that can be left until later.

 :   DMZ    : Private network
 :          :  +-----------------+
 :          :  |k8s   /--> Rails |
---> nginx --->|------+--> Rails |
 :          :  | (NP) \--> Rails |
 :          :  +-----------------+

(In this last diagram, a Kubernetes Ingress controller can take the place of that perimeter nginx proxy, and it's reasonable to run nginx in-cluster with an nginx Ingress in front of it as well. LB, NP, and CIP refer to LoadBalancer, NodePort, and ClusterIP Kubernetes Service objects.)
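
As a rough illustration of that Ingress-based variant (the Service names, ports, and paths are assumptions):

    # Hypothetical Ingress splitting traffic between the static frontend
    # and the Rails API. Service names, ports, and paths are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
    spec:
      rules:
        - http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: rails
                    port:
                      number: 3000
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: frontend
                    port:
                      number: 80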

A third possible deployment path is to use your cloud provider's object-storage service (e.g., AWS S3) in this role. This has a couple of advantages: it is probably extremely reliable, and it can readily hold the hash-stamped files that tools like Webpack produce. In this sense there isn't an "update" per se; you just upload the hash-stamped files for the new build and a new index.html file that points at them.
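
A sketch of that upload step with the AWS CLI, assuming a hypothetical bucket name:

    # Upload the new hash-stamped assets first (they never change, so
    # they can be cached aggressively), then the index.html that points
    # at them. The bucket name is a placeholder.
    aws s3 sync build/ s3://my-frontend-bucket/ \
        --exclude index.html \
        --cache-control "public, max-age=31536000, immutable"
    aws s3 cp build/index.html s3://my-frontend-bucket/index.html \
        --cache-control "no-cache"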

Which way to go here depends on how familiar you are with these other tools and how much you want to maintain them. An all-Docker/all-Kubernetes path is very reasonable; so is relying on your public-cloud provider for basic file service.

Upvotes: 2
