Reputation: 928
I performed the following steps.
Created the replication controller with the following config file:
{
  "kind": "ReplicationController",
  "apiVersion": "v1",
  "metadata": {
    "name": "fsharp-service",
    "labels": {
      "app": "fsharp-service"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "fsharp-service"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "fsharp-service"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "fsharp-service",
            "image": "fsharp/fsharp:latest",
            "ports": [
              {
                "name": "http-server",
                "containerPort": 3000
              }
            ]
          }
        ]
      }
    }
  }
}
Ran the command:
kubectl create -f fsharp-controller.json
Here is the output:
$ kubectl get rc
CONTROLLER       CONTAINER(S)     IMAGE(S)                             SELECTOR             REPLICAS
cassandra        cassandra        gcr.io/google-samples/cassandra:v8   app=cassandra        3
fsharp-service   fsharp-service   fsharp/fsharp:latest                 app=fsharp-service   1
$ kubectl get pods
NAME                   READY   REASON    RESTARTS   AGE
cassandra              1/1     Running   0          28m
cassandra-ch1br        1/1     Running   0          28m
cassandra-xog49        1/1     Running   0          27m
fsharp-service-7lrq8   0/1     Error     2          31s
$ kubectl logs fsharp-service-7lrq8
F# Interactive for F# 4.0 (Open Source Edition)
Freely distributed under the Apache 2.0 Open Source License
For help type #help;;
$ kubectl get pods
NAME                   READY   REASON             RESTARTS   AGE
cassandra              1/1     Running            0          28m
cassandra-ch1br        1/1     Running            0          28m
cassandra-xog49        1/1     Running            0          28m
fsharp-service-7lrq8   0/1     CrashLoopBackOff   3          1m
$ kubectl describe po fsharp-service-7lrq8
W0417 15:52:36.288492 11461 request.go:302] field selector: v1 - events - involvedObject.name - fsharp-service-7lrq8: need to check if this is versioned correctly.
W0417 15:52:36.289196 11461 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0417 15:52:36.289204 11461 request.go:302] field selector: v1 - events - involvedObject.uid - d4dab099-04ee-11e6-b7f9-0a11c670939b: need to check if this is versioned correctly.
Name: fsharp-service-7lrq8
Image(s): fsharp/fsharp:latest
Node: ip-172-20-0-228.us-west-2.compute.internal/172.20.0.228
Labels: app=fsharp-service
Status: Running
Replication Controllers: fsharp-service (1/1 replicas created)
Containers:
fsharp-service:
Image: fsharp/fsharp:latest
State: Waiting
Reason: CrashLoopBackOff
Ready: False
Restart Count: 3
Conditions:
Type Status
Ready False
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Sun, 17 Apr 2016 15:50:50 -0700 Sun, 17 Apr 2016 15:50:50 -0700 1 {default-scheduler } Scheduled Successfully assigned fsharp-service-7lrq8 to ip-172-20-0-228.us-west-2.compute.internal
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:50:51 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id d44c288ea67b
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:50:51 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id d44c288ea67b
Sun, 17 Apr 2016 15:50:55 -0700 Sun, 17 Apr 2016 15:50:55 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id 688a3ed122d2
Sun, 17 Apr 2016 15:50:55 -0700 Sun, 17 Apr 2016 15:50:55 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id 688a3ed122d2
Sun, 17 Apr 2016 15:50:58 -0700 Sun, 17 Apr 2016 15:50:58 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 10s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
Sun, 17 Apr 2016 15:51:15 -0700 Sun, 17 Apr 2016 15:51:15 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id c2e348e1722d
Sun, 17 Apr 2016 15:51:15 -0700 Sun, 17 Apr 2016 15:51:15 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id c2e348e1722d
Sun, 17 Apr 2016 15:51:17 -0700 Sun, 17 Apr 2016 15:51:31 -0700 2 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 20s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
Sun, 17 Apr 2016 15:50:50 -0700 Sun, 17 Apr 2016 15:51:44 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Pulling pulling image "fsharp/fsharp:latest"
Sun, 17 Apr 2016 15:51:45 -0700 Sun, 17 Apr 2016 15:51:45 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Created Created container with docker id edaea97fb379
Sun, 17 Apr 2016 15:50:51 -0700 Sun, 17 Apr 2016 15:51:45 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Pulled Successfully pulled image "fsharp/fsharp:latest"
Sun, 17 Apr 2016 15:51:46 -0700 Sun, 17 Apr 2016 15:51:46 -0700 1 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} Started Started container with docker id edaea97fb379
Sun, 17 Apr 2016 15:50:58 -0700 Sun, 17 Apr 2016 15:52:27 -0700 7 {kubelet ip-172-20-0-228.us-west-2.compute.internal} spec.containers{fsharp-service} BackOff Back-off restarting failed docker container
Sun, 17 Apr 2016 15:51:48 -0700 Sun, 17 Apr 2016 15:52:27 -0700 4 {kubelet ip-172-20-0-228.us-west-2.compute.internal} FailedSync Error syncing pod, skipping: failed to "StartContainer" for "fsharp-service" with CrashLoopBackOff: "Back-off 40s restarting failed container=fsharp-service pod=fsharp-service-7lrq8_default(d4dab099-04ee-11e6-b7f9-0a11c670939b)"
What is wrong?
How can I find out why the container won't start correctly?
UPDATE.
I have tried replacing the plain "fsharp/fsharp:latest" image with another image that runs a service listening on a port, since that is how I want to use the container.
The image is called "username/someservice:mytag" and runs a service listening on port 3000.
I run the service as:
mono Service.exe
When I look at the logs I see this:
$ kubectl logs -p fsharp-service-wjmpv
Running on http://127.0.0.1:3000
Press enter to exit
So the container ends up in the same state, even though the process shouldn't exit:
$ kubectl get pods
NAME                   READY   REASON             RESTARTS   AGE
fsharp-service-wjmpv   0/1     CrashLoopBackOff   9          25m
I also tried running the container from my image with the -i flag, to keep the container from exiting, but kubectl doesn't seem to recognize the -i flag :\
Any thoughts?
Upvotes: 0
Views: 2726
Reputation: 928
I have added the following lines to my F# service (Unix-specific code, using the Mono.Posix library) to make sure the process doesn't exit:
open Mono.Unix
open Mono.Unix.Native

// Block the main thread until one of the POSIX termination
// signals arrives (-1 = wait with no timeout).
let signals = [| new UnixSignal (Signum.SIGINT);
                 new UnixSignal (Signum.SIGTERM);
                 new UnixSignal (Signum.SIGQUIT) |]
let which = UnixSignal.WaitAny (signals, -1)
After that my replication controller is running normally.
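For reference, the same "never exit on its own" effect can be had without the Mono.Posix API; a minimal sketch in plain .NET (note it does not handle SIGTERM gracefully, so Kubernetes will wait out the grace period and SIGKILL the container on deletion):

```fsharp
open System.Threading

// Park the main thread forever so the container's entrypoint
// process stays alive and the pod never hits CrashLoopBackOff.
Thread.Sleep(Timeout.Infinite)
```

The signal-waiting version above is preferable in a cluster, since reacting to SIGTERM lets the pod shut down promptly.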
Upvotes: 0
Reputation: 297
I would use kubectl logs to try to find out what has happened to your container, like so:
kubectl logs -p fsharp-service-7lrq8
The -p flag lets you get the logs of the previous instance of the container, which is necessary here since the container keeps crashing and being restarted.
More information: http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
Upvotes: 3
Reputation: 18210
You are launching a container that immediately exits. The kubelet notices, restarts it, and then it exits again. After this happens a few times, the kubelet slows down the rate at which it tries to launch the container (this is the CrashLoopBackOff state).
The fsharp documentation says to run the container with the -i
flag, which gives an interactive prompt. If you just do
docker run fsharp/fsharp:latest
you'll notice that the container exits immediately and dumps you back into your local shell. This is the way in which you are trying to invoke the container in your cluster, and it is likewise exiting immediately.
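If you really do want an interactive container to stay up, Kubernetes exposes docker's -i/-t behavior through the stdin and tty fields on the container spec rather than through a kubectl flag. A sketch of the container fragment from the question's config with those fields added (usually, though, the better fix is to run a long-lived foreground service as the entrypoint):

```json
{
  "name": "fsharp-service",
  "image": "fsharp/fsharp:latest",
  "stdin": true,
  "tty": true,
  "ports": [
    { "name": "http-server", "containerPort": 3000 }
  ]
}
```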
Upvotes: 3