Reputation: 746
First off, let me say that I'm fairly new to Kubernetes and Microservices architecture in general. Let me also say that this is more of a high-level architectural question versus seeking prescriptive, how-to advice related to the technology. I think I can figure the implementation details out, but I'm unsure what mechanisms exist to get me to where I want to go.
Simple "e-commerce" sample application running on a K8s cluster. SPA Front end, with .NET Core services for the API.
An Angular SPA is built and deployed to an NGinx container that serves the Angular app as a static site. It's running as a LoadBalancer service on the cluster.

Two simple services run as ClusterIP services in the cluster. Let's call them Product and Order.

When the SPA is deployed, it makes requests to the services from the user's browser rather than from within the cluster. The services aren't running in a LoadBalancer configuration, so they're not exposed outside the cluster. What's the best practice here for getting the client application talking to the services? Specifically:
- Do I really need to expose every microservice externally?
- Is there some kind of proxy technique, where I can expose a single <cluster>/api endpoint that routes to the appropriate backend service?
- Any referenceable repos I can look at for an example?

I've searched through SO and have found similar questions, but none that ask exactly this question. I'll gladly delete this question if someone points me to an existing one that enlightens me.
Upvotes: 1
Views: 1379
Reputation: 21
One thing is the Angular app, which you can serve as a static web site or deploy to a CDN. The other is this:
Is there some kind of proxy technique, where I can expose a single /api endpoint, that routes to the appropriate backend service?
I think you should use a Kubernetes Ingress (nginx by default, but you can use Traefik, etc.). The point here is that you only have to expose the ingress controller with a LoadBalancer service; then you can route your traffic with Ingress rules.
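As a rough sketch of that idea, assuming the Product and Order services from the question are exposed as ClusterIP services named product and order on port 80 (those names, ports, and paths are assumptions, not from the original post), an Ingress for the ingress-nginx controller could look something like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    # strip the /api/<service> prefix before forwarding to the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /api/product(/|$)(.*)   # e.g. /api/product/123 -> product service, path /123
        pathType: ImplementationSpecific
        backend:
          service:
            name: product    # hypothetical ClusterIP service name
            port:
              number: 80
      - path: /api/order(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: order      # hypothetical ClusterIP service name
            port:
              number: 80
```

With something like this, only the ingress controller needs a LoadBalancer service; the Product and Order services stay internal to the cluster.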
I don't know whether you're working in the cloud or on-premise. I have to do this on-premise, and I'm using MetalLB as the load balancer.

I recommend watching the videos of this amazing channel: justmeandopensource, e.g. "nginx ingress on bare metal with HAProxy as load balancer".

Sorry if I missed something; quick reply here! :)
Upvotes: 0
Reputation: 2076
Do I really need to expose every microservice externally?
No, you shouldn't.
Is there some kind of proxy technique, where I can expose a single /api endpoint, that routes to the appropriate backend service?
The standard way is to use nginx as a proxy.
Any referenceable repos I can look at for an example?
You can check my toy project:
Specifically, this is how the back-end API is referenced in nginx in the toy project above (note that it uses WebSockets, which may not be the case for you):
location /api {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; # AWS version - essentially this sets the https scheme

    # enable WebSockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
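For the snippet above to work, proxy_pass http://backend; needs a matching upstream block somewhere in the nginx config. A minimal sketch, assuming the backend is reachable via an in-cluster service DNS name (the service name and namespace here are made up for illustration):

```nginx
# hypothetical upstream; in Kubernetes this would typically be the
# ClusterIP service's DNS name: <service>.<namespace>.svc.cluster.local
upstream backend {
    server order.default.svc.cluster.local:80;
}
```

Alternatively, you can skip the upstream block entirely and put the service DNS name directly in the proxy_pass directive.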
Upvotes: 3