tmandry

Reputation: 1375

Updating a docker image in Google Cloud Platform

I thought I had figured out how to update a Docker image in Google Container Engine, but now it just reverts to the original version of the image. Here's what I did:

Original image

docker build -t gcr.io/jupiter-1068/jupiter .
gcloud docker push gcr.io/jupiter-1068/jupiter
kubectl create -f rc.yaml
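
For reference, a minimal sketch of the kind of rc.yaml involved (the container name, labels, and port are illustrative, not taken from the actual file; note that an untagged image reference resolves to :latest):

apiVersion: v1
kind: ReplicationController
metadata:
  name: staging
  labels:
    run: staging
spec:
  replicas: 1
  selector:
    app: jupiter
  template:
    metadata:
      labels:
        app: jupiter
    spec:
      containers:
      - name: jupiter
        # no tag specified, so this resolves to :latest
        image: gcr.io/jupiter-1068/jupiter
        ports:
        - containerPort: 8080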

v2

docker build -t gcr.io/jupiter-1068/jupiter:2 .
gcloud docker push gcr.io/jupiter-1068/jupiter:2
kubectl rolling-update staging --image=gcr.io/jupiter-1068/jupiter:2

This worked. But then I tried updating to v3 in the same way as v2, and the pod seems to be running the original image's code. What's going on?
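
For completeness, the v3 attempt (a sketch, assuming it mirrored the v2 steps exactly with the tag bumped):

docker build -t gcr.io/jupiter-1068/jupiter:3 .
gcloud docker push gcr.io/jupiter-1068/jupiter:3
kubectl rolling-update staging --image=gcr.io/jupiter-1068/jupiter:3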

Update

I tried again with :latest. Output of kubectl describe rc staging:

Name:       staging
Namespace:  default
Image(s):   gcr.io/jupiter-1068/jupiter:latest
Selector:   app=jupiter,deployment=f400f87308696febbe567614f3cc3428,version=1
Labels:     run=staging
Replicas:   1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.

Output of kubectl describe pod <podname>:

Name:               staging-b4c7103521d97ef91f482db729da9584-0va8i
Namespace:          default
Image(s):           gcr.io/jupiter-1068/jupiter:latest
Node:               gke-staging-4adcf7c5-node-ygf7/10.240.251.174
Labels:             app=jupiter,deployment=f400f87308696febbe567614f3cc3428,version=1
Status:             Running
Reason:
Message:
IP:             10.8.0.24
Replication Controllers:    staging (1/1 replicas created)
Containers:
  jupiter:
    Image:  gcr.io/jupiter-1068/jupiter:latest
    Limits:
      cpu:      100m
    State:      Running
      Started:      Tue, 15 Sep 2015 21:08:32 -0500
    Ready:      True
    Restart Count:  1
Conditions:
  Type      Status
  Ready     True
Events:
  FirstSeen             LastSeen            Count   From                        SubobjectPath               Reason      Message
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {scheduler }                                        scheduled   Successfully assigned staging-b4c7103521d97ef91f482db729da9584-0va8i to gke-staging-4adcf7c5-node-ygf7
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    implicitly required container POD   pulled      Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    implicitly required container POD   created     Created with docker id 13cd80e199a4
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    implicitly required container POD   started     Started with docker id 13cd80e199a4
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    spec.containers{jupiter}        created     Created with docker id 724fdedd11be
  Tue, 15 Sep 2015 21:07:05 -0500   Tue, 15 Sep 2015 21:07:05 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    spec.containers{jupiter}        started     Started with docker id 724fdedd11be
  Tue, 15 Sep 2015 21:08:32 -0500   Tue, 15 Sep 2015 21:08:32 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    spec.containers{jupiter}        created     Created with docker id 2022b9f5f054
  Tue, 15 Sep 2015 21:08:32 -0500   Tue, 15 Sep 2015 21:08:32 -0500 1   {kubelet gke-staging-4adcf7c5-node-ygf7}    spec.containers{jupiter}        started     Started with docker id 2022b9f5f054

Upvotes: 1

Views: 4059

Answers (3)

tmandry

Reputation: 1375

I manually deleted and recreated the rc/pod and everything works now, including rolling updates. From support:

It appears there was an issue in Container Registry that was preventing v2 of the image from being pulled, but due to the image and pod being deleted we won't be able to investigate further.

If you run into this issue, consider contacting support so they can investigate before you delete your pod.
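
For reference, the delete and recreate step was along these lines (a sketch; assumes the original rc.yaml from the question):

kubectl delete rc staging
kubectl create -f rc.yaml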

Upvotes: 0

Jeffrey van Gogh

Reputation: 276

The :latest tag in Docker is kind of confusing: it doesn't mean "latest upload"; it's just the default tag applied when you don't specify one.

In your scenario, :latest points to your original image, since that was the only push for which you didn't specify a tag.
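
If you want :latest to track your newest build, you have to move the tag forward explicitly when you push. A sketch using the image name from the question (older Docker clients may need docker tag -f to overwrite an existing local tag):

docker build -t gcr.io/jupiter-1068/jupiter:3 .
# retag the new build as :latest and push both tags
docker tag gcr.io/jupiter-1068/jupiter:3 gcr.io/jupiter-1068/jupiter:latest
gcloud docker push gcr.io/jupiter-1068/jupiter:3
gcloud docker push gcr.io/jupiter-1068/jupiter:latest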

Upvotes: 2

Alex Robinson

Reputation: 13407

To figure out what's going on, try running kubectl describe rc staging, which will show you the details of the replication controller, including which image it thinks it's running and any events relevant to it. If the output says that the rc is running the new image, then check the pods (using kubectl describe pods <pod-name>) to see which image they're running and if there are any events.
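
In command form (the pod name comes from kubectl get pods):

kubectl describe rc staging
kubectl get pods
kubectl describe pods <pod-name>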

These two commands should hopefully enlighten you as to what's going on, but if not, respond back with the output!

Upvotes: 1
