Reputation: 15066
I have created a Prometheus and Grafana setup in Kubernetes, as described here: https://github.com/ContainerSolutions/k8s-deployment-strategies.
Install Prometheus:
helm install \
--namespace=monitoring \
--name=prometheus \
--set server.persistentVolume.enabled=false \
--set alertmanager.persistentVolume.enabled=false \
stable/prometheus
Set up Grafana:
helm install \
--namespace=monitoring \
--name=grafana \
--version=1.12.0 \
--set=adminUser=admin \
--set=adminPassword=admin \
--set=service.type=NodePort \
stable/grafana
kubectl get pods -n monitoring
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-6d4f6ff6d5-vw8r2                         1/1     Running   0          17h
prometheus-alertmanager-6cb6bc6b7-76fs4          2/2     Running   0          17h
prometheus-kube-state-metrics-5ff476d674-c7mpt   1/1     Running   0          17h
prometheus-node-exporter-4zhmk                   1/1     Running   0          17h
prometheus-node-exporter-g7jqm                   1/1     Running   0          17h
prometheus-node-exporter-sdnwg                   1/1     Running   0          17h
prometheus-pushgateway-7967b4cf45-j24hx          1/1     Running   0          17h
prometheus-server-5dfc4f657d-sl7kv               2/2     Running   0          17h
I can curl my Prometheus server from inside the Grafana container at http://prometheus-server (it replies "Found").
Grafana data source config:
Name: prometheus
Type: Prometheus
http://prometheus-server
I see metrics in a default dashboard called Prometheus 2.0 stats. I've also created my own dashboard (as described in the GitHub link above) with the following query:
sum(rate(http_requests_total{app="my-app"}[5m])) by (version)
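To rule out the query itself, it can help to first check whether the raw metric exists in Prometheus at all. A sketch using Prometheus's HTTP query API (the pod name is taken from the listing above; any port-forward to the server works):

```shell
# Forward the Prometheus server port to localhost
kubectl port-forward -n monitoring prometheus-server-5dfc4f657d-sl7kv 9090 &

# Query the raw series with no aggregation and no label filter.
# An empty "result" array means the metric is never scraped at all,
# i.e. a discovery/scrape problem rather than a dashboard problem.
curl -s 'http://localhost:9090/api/v1/query?query=http_requests_total'
```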
I've deployed my-app, which is running, and I curl it repeatedly, but the dashboard shows nothing.
kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
my-app-7bd4b55cbd-8zm8b   1/1     Running   0          17h
my-app-7bd4b55cbd-nzs2p   1/1     Running   0          17h
my-app-7bd4b55cbd-zts78   1/1     Running   0          17h
Curl output:
while sleep 0.1; do curl http://192.168.50.10:30513/; done
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-nzs2p, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-zts78, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
Host: my-app-7bd4b55cbd-8zm8b, Version: v2.0.0
How can I debug this or what am I doing wrong?
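Two things worth checking, assuming the Prometheus Helm chart's default annotation-based scrape config: that the scrape annotations actually reached the running pods, and Prometheus's own view of its targets. A sketch:

```shell
# 1. Confirm the annotations are on the pods themselves,
#    not just on the deployment object
kubectl get pods -l app=my-app \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.prometheus\.io/port}{"\n"}{end}'

# 2. Port-forward the Prometheus server and open http://localhost:9090/targets
#    in a browser: a my-app target shown as DOWN with "connection refused"
#    on port 9101 would mean discovery works but nothing listens on that port
kubectl port-forward -n monitoring deploy/prometheus-server 9090
```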
Update: my-app deployment config
kubectl describe deployment my-app
Name:                   my-app
Namespace:              default
CreationTimestamp:      Tue, 02 Apr 2019 22:17:31 +0200
Labels:                 app=my-app
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"my-app"},"name":"my-app","namespace":"default"},...
Selector:               app=my-app
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           Recreate
MinReadySeconds:        0
Pod Template:
  Labels:       app=my-app
                version=v2.0.0
  Annotations:  prometheus.io/port: 9101
                prometheus.io/scrape: true
  Containers:
   my-app:
    Image:        containersol/k8s-deployment-strategies
    Ports:        8080/TCP, 8086/TCP
    Host Ports:   0/TCP, 0/TCP
    Liveness:     http-get http://:probe/live delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:    http-get http://:probe/ready delay=0s timeout=1s period=5s #success=1 #failure=3
    Environment:
      VERSION:  v2.0.0
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   my-app-7bd4b55cbd (3/3 replicas created)
Events:          <none>
YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  # The selector field tells the deployment which pods to update with
  # the new version. This field is optional, but if you have labels
  # uniquely defined for the pod, in this case the "version" label,
  # then we need to redefine the matchLabels and eliminate the version
  # field from there.
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Upvotes: 2
Views: 2548
Reputation: 427
In your deployment you tell Prometheus to scrape port 9101 (via the prometheus.io/port annotation), but you never expose that port on your container: only 8080 and 8086 are declared. Is your Prometheus metrics endpoint actually listening on port 9101, or on 8080/8086?
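If the app really does serve /metrics on 9101, a minimal sketch of the fix would be to declare that port in the container spec so the annotation and the container line up (note that declaring the port does not by itself make the app listen there; if metrics are actually served on 8080 or 8086 instead, change the prometheus.io/port annotation to that port rather than adding this one):

```yaml
ports:
- name: http
  containerPort: 8080
- name: probe
  containerPort: 8086
- name: metrics        # the port referenced by the prometheus.io/port annotation
  containerPort: 9101
```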
Upvotes: 3