Reputation: 2707
I have a k8s service that is behind a load balancer and exposes a /metrics
endpoint. However, these metrics are identical for each pod, so there is no need to collect them from each pod - rather, any pod can provide the data. Currently, this results in the same metrics being emitted with only the pod dimension changing.
What would be the idiomatic way to handle this? My first thought was to create a pseudo-endpoint that points to the service and collect from that (roughly sketched after the manifest below), but this seems overly complicated.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "80"
  labels:
    app: {{ .Values.name }}-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: {{ .Values.name }}-service-pod
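For illustration, a scrape job targeting Services rather than pods could look roughly like the following. This is a minimal sketch assuming Prometheus's kubernetes_sd_configs service role and the annotations in the manifest above; the job name is made up:

- job_name: kubernetes-services
  kubernetes_sd_configs:
    - role: service              # one target per Service port, not per pod
  relabel_configs:
    # keep only Services annotated prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # honour the prometheus.io/path annotation (defaults to /metrics)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # scrape the Service address on the annotated port
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__

Since each scrape is routed by the Service to an arbitrary backing pod, this only behaves well when the metrics really are identical across pods, which is the premise here.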
Upvotes: 3
Views: 451
Reputation: 324
Firstly, let me say that it is not necessarily the case that every pod will return the same metrics; pods might return stats on their CPU utilisation, individual job queues, and a whole load of other pod-specific things. If this is your own product and they currently don't, they might in the future. It's certainly useful to be able to use your metrics to spot issues in the running of a workload in Kubernetes, i.e. to monitor the individual pods.
Idiomatically, monitoring all pods is the way to go.
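To make that concrete, the usual pattern is a scrape job that discovers pods directly. The sketch below assumes Prometheus's kubernetes_sd_configs pod role, with the prometheus.io annotations moved from the Service onto the pod template; the job name is illustrative:

- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod                  # one scrape target per pod
  relabel_configs:
    # keep only pods whose template is annotated prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # honour a prometheus.io/path annotation (defaults to /metrics)
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # label each series with the pod it came from
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: pod

If the duplicated series are noisy in dashboards, you can collapse the pod dimension at query time, e.g. max without (pod) (your_metric), while the per-pod up series still tells you which individual pods are scrapeable.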
Upvotes: 3