Reputation: 1179
My use case requires passthrough SSL, so unfortunately we can't use path-based routing natively in OpenShift. Our next best solution was to set up an internal NGINX proxy to route traffic from a path to another web UI's OpenShift route. I'm getting errors when doing so.
Here's my simplified NGINX config:
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /etc/nginx/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    upstream app1-ui-1-0 {
        server app1-1-0.192.168.99.100.nip.io:443;
    }

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            proxy_pass https://app1-ui-1-0/;
        }
    }
}
My app1 route configuration is as follows:
apiVersion: v1
kind: Route
metadata:
  name: app1-1-0
spec:
  host: app1-1-0.192.168.99.100.nip.io
  to:
    kind: Service
    name: app1-1-0
  tls:
    insecureEdgeTerminationPolicy: Redirect
    termination: passthrough
When I hit https://app1-1-0.192.168.99.100.nip.io, the app works fine.
When I hit the NGINX proxy route URL (https://proxier-1-0.192.168.99.100.nip.io), it correctly serves NGINX's standard index.html page.
However, when I try to reach app1 through the proxy via https://proxier-1-0.192.168.99.100.nip.io/apps/app1/, I get the following OpenShift error:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Via logs and testing, I know the request is reaching the /apps/app1/ location block, but it never arrives at app1's NGINX. I've also confirmed the error is coming from either app1's router or its service, but I don't know how to troubleshoot further since neither produces logs. Any ideas?
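For reference, the kind of proxy-side logging that showed me the request reaches the location block is sketched below; $upstream_addr and $upstream_status are standard NGINX variables, and the exact format here is illustrative rather than my actual config (the debug error-log level also requires an NGINX binary built with --with-debug):

    # Illustrative troubleshooting additions only
    error_log /var/log/nginx/error.log debug;  # 'debug' needs a --with-debug build; use 'info' otherwise

    http {
        # Show where proxy_pass actually connected and what the upstream returned
        log_format upstream_dbg '$remote_addr "$request" status=$status '
                                'upstream=$upstream_addr upstream_status=$upstream_status';
        access_log /var/log/nginx/access.log upstream_dbg;
        ...
    }

If upstream= shows a resolved address but upstream_status= never gets a value, the connection to the route is failing before any response comes back.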
Upvotes: 3
Views: 4983
Reputation: 1779
When you want to make a request to some other application running in the same OpenShift cluster, the correct solution in most cases is to use the internal DNS.
OpenShift ships with an SDN that enables communication between Pods. This is more efficient than reaching another Pod via its Route, since a Route will typically send the request back out to the public network before it hits the OpenShift router again and is only then forwarded over the SDN.
Services can be reached at <service>.<pod_namespace>.svc.cluster.local, which in your case enables NGINX to proxy via server app1-1-0.myproject.svc.cluster.local
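As a sketch, the question's upstream would then become something like the following (the myproject namespace is from the example above, and port 443 is carried over from the original config):

    upstream app1-ui-1-0 {
        server app1-1-0.myproject.svc.cluster.local:443;
    }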
Routes should typically be reserved for routing external traffic into the cluster.
See the OpenShift networking docs for more details.
Upvotes: 6
Reputation: 1179
Per a comment above, I ended up dropping the route and referencing the service's internal DNS name in NGINX's upstream:

    upstream finder-ui-1-0 {
        server app1-1-0.myproject.svc.cluster.local:443;
    }
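For completeness, the server block stays essentially the same; the only addition worth noting is proxy_ssl_server_name, a standard NGINX directive (not in my original config) that forwards SNI to a TLS upstream, which some backends require:

    server {
        listen 8443 ssl default_server;

        location /apps/app1/ {
            proxy_pass https://finder-ui-1-0/;
            proxy_ssl_server_name on;  # send SNI to the TLS upstream; may not be required in every setup
        }
    }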
This suited my needs just fine and worked well.
Upvotes: 1