Reputation: 1192
I'm working on the setup of a new Rails project, hosted with Google Kubernetes Engine. Everything was going fine until I switched my deployed server to production mode with RAILS_ENV=production.
My Kubernetes pods no longer reach the ready state. The readiness probe is apparently forbidden from hitting the server, since it returns a 403 code.
When I run kubectl describe pod <name> on a stuck pod, I get this:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m25s default-scheduler Successfully assigned front to gke-interne-pool
Normal Pulling 5m24s kubelet Pulling image "registry/image:latest"
Normal Pulled 5m24s kubelet Successfully pulled image "registry/image:latest"
Normal Created 5m24s kubelet Created container front
Normal Started 5m24s kubelet Started container front
Warning Unhealthy 11s (x19 over 4m41s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 403
The output of kubectl logs <name> for this pod indeed shows no requests from the probe.
But when I open a console with kubectl exec -it deploy/front -- bash, I can run curl -s http://localhost:3000, which works perfectly, shows up in the logs, and returns 200.
My setup works in development mode but not in production, so the Rails 6 app config is the main suspect. Something I don't understand about the production mode of Rails 6 forbids my readiness probes from contacting my pod.
Just in case, here is the readiness part of deployment.yaml:
spec:
  containers:
  - name: front
    image: registry/image:latest
    ports:
    - containerPort: 3000
    readinessProbe:
      httpGet:
        path: "/"
        port: 3000
      initialDelaySeconds: 30
      periodSeconds: 15
Upvotes: 1
Views: 7048
Reputation: 111
Another potential cause of 403 readiness errors with Rails 6 is the allowed-hosts list defined in your config. Typically, you'll have a line in config/environments/production.rb that looks something like:
config.hosts << "www.mydomain.com"
This causes Rails to reject any request whose host is anything other than "www.mydomain.com". The readiness checks come from a private IP address within your cluster, so they will be rejected under the above config.
One way to get around this is by adding an additional hosts entry that allows traffic from any private IP addresses:
config.hosts << "www.mydomain.com"
config.hosts << /\A10\.\d+\.\d+\.\d+\z/
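config.hosts accepts regexps as well as strings, and a regexp entry is matched against the request's host (port excluded). A quick plain-Ruby sanity check of the pattern above — the sample IPs are illustrative, and your cluster's pod CIDR may differ from 10.0.0.0/8, so verify the range before relying on it:

```ruby
# Matches hosts in the 10.0.0.0/8 private range, which many GKE
# cluster networks use (check your cluster's actual CIDR).
PRIVATE_HOST = /\A10\.\d+\.\d+\.\d+\z/

puts PRIVATE_HOST.match?("10.12.0.7")        # => true  (probe source inside the cluster)
puts PRIVATE_HOST.match?("www.mydomain.com") # => false (still needs its own hosts entry)
```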
Upvotes: 4
Reputation: 142
I can't spot a specific error that would explain why your implementation fails after switching to RAILS_ENV=production. But while researching the 403 error I found a workaround that seems to have worked for some users in similar cases: explicitly set the scheme in your YAML, like so:
readinessProbe:
  httpGet:
    path: "/"
    port: 3000
    scheme: "HTTP"
  initialDelaySeconds: 30
  periodSeconds: 15
Even though I was not able to find an error in your deployment, the 403 could also be related to credentials, so validate the route you configured in Rails and the permissions it has, and check whether these change between the development and production environments.
As a last option, I would look into your suspicions about the Rails app itself, since I don't otherwise see what changing the environment variable to production would affect.
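If it does turn out to be the Rails app, note that ActionDispatch's host authorization is enforced outside development, which would explain a production-only 403. As a hedged sketch (the :exclude option for config.host_authorization was added in Rails 6.1, so treat this as an assumption for your exact version), you could exempt the probe path while keeping a strict host allowlist:

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Allow the public domain as usual.
  config.hosts << "www.mydomain.com"

  # Skip host authorization only for the probe path "/",
  # so kubelet probes from cluster-internal IPs are not blocked.
  # NOTE: assumes Rails 6.1+, where :exclude is supported.
  config.host_authorization = {
    exclude: ->(request) { request.path == "/" }
  }
end
```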
Upvotes: 2