Reputation: 3565
I have tried many methods to build my Rails app into a Docker image and deploy it to Google Container Engine, but so far none of them have succeeded.
My Dockerfile (under the Rails root path):
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential
RUN apt-get install -y nodejs
ENV APP_HOME /myapp
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile $APP_HOME/Gemfile
ADD Gemfile.lock $APP_HOME/Gemfile.lock
ADD vendor/gems/my_gem $APP_HOME/vendor/gems/my_gem
ADD init.sh $APP_HOME/
RUN export LANG=C.UTF-8 && bundle install
ADD . $APP_HOME
CMD ["sh", "init.sh"]
My init.sh:
#!/bin/bash
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0
My Kubernetes config file (web-controller.yml):
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 2
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-project-id/myapp:v1
          ports:
            - containerPort: 3000
              name: http-server
          env:
            - name: RAILS_ENV
              value: "production"
After I create the web controller on GKE with kubectl:
kubectl create -f web-controller.yml
and see the pod logs:
kubectl logs web-controller-xxxxx
it shows:
init.sh: 2: init.sh: bundle: not found
init.sh: 3: init.sh: bundle: not found
It seems bundle cannot be found on the PATH. How can I fix this?
Upvotes: 0
Views: 1383
Reputation: 3133
Maybe you should execute your init.sh directly instead of sh init.sh? It would appear that $PATH (and maybe other environment variables) is not getting set for that sh init.sh shell. If you can exec into the container and which bundle shows the path to bundle, then you're losing your login ENVs when executing with sh init.sh.
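For example, something along these lines at the end of the Dockerfile would run the script directly (a sketch; the chmod is only needed if init.sh isn't already executable in your repo):

ADD init.sh $APP_HOME/
RUN chmod +x $APP_HOME/init.sh
CMD ["./init.sh"]

With WORKDIR already set to $APP_HOME and the #!/bin/bash shebang in place, the script then runs under bash rather than sh.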
If it helps at all, I've written a how-to on deploying Rails on GKE with Kubernetes. One thing you may want to change is that if you have several of your web pods running, they will all run the init.sh script and they will all attempt to db:migrate. There will be a race condition for which one migrates and in what order (if you have many). You probably only want to run db:migrate from one container during a deploy. You can use a Kubernetes Job to accomplish that, or something like kubectl run migrator --image=us.gcr.io/your/image --rm --restart=Never, to execute the db:migrate task just once before rolling out your new web pods.
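For example, a Job along these lines would run the migration once per deploy (a sketch reusing the image and env from the question; the batch/v1 apiVersion assumes a cluster version that has the Job API):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    metadata:
      name: db-migrate
    spec:
      containers:
        - name: db-migrate
          image: gcr.io/my-project-id/myapp:v1
          command: ["bundle", "exec", "rake", "db:migrate"]
          env:
            - name: RAILS_ENV
              value: "production"
      restartPolicy: Never

Job pods need restartPolicy: Never or OnFailure; once the Job completes, the web pods can be rolled out without each of them racing to migrate.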
Upvotes: 2
Reputation: 845
You can use kubectl exec to enter your container and print the environment. http://kubernetes.io/v1.1/docs/user-guide/getting-into-containers.html
For example: kubectl exec web-controller-xxxxx -- sh -c printenv
You could also use kubectl interactively to confirm that bundle is in your container image:
kubectl exec -ti web-controller-xxxxx sh
If bundle is in your image, then either add its directory to PATH in init.sh, or specify its path explicitly in each command.
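For the first option, a sketch of init.sh (the /usr/local/bundle/bin directory is only an assumption about where bundler lives in the image; substitute whatever directory which bundle prints inside your container):

#!/bin/bash
export PATH=/usr/local/bundle/bin:$PATH   # assumed bundler location; verify with `which bundle`
bundle exec rake db:create db:migrate
bundle exec rails server -b 0.0.0.0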
Upvotes: 1