Reputation: 11
I want to use Grafana and Prometheus to monitor some ML models in production. I already have a connector that exports the metrics stored in MLFlow and makes them visible to Prometheus. I can now query these metrics in Prometheus, but they are all shown at the current query time.
So far I have managed to provide a timestamp other than the query time by creating a custom Gauge subclass:
from datetime import datetime, timedelta

from prometheus_client import Gauge
from prometheus_client.samples import Sample

class MyGauge(Gauge):
    def __init__(self, *args, timestamp=None, **kwargs):
        super().__init__(*args, **kwargs)
        self._timestamp = timestamp

    def collect(self):
        metric = self._get_metric()
        for suffix, labels, value, timestamp, exemplar in self._samples():
            metric.add_sample(self._name + suffix, labels, value, timestamp, exemplar)
        return [metric]

    def _child_samples(self):
        # Emit the sample with an explicit timestamp (here 12 hours in the past)
        # instead of letting Prometheus stamp it with the scrape time.
        ts = int((datetime.now() - timedelta(hours=12)).timestamp())
        return (Sample('', {}, self._value.get(), ts, None),)
This works when the timestamp is shifted back by a few hours (around 5-6 h) with respect to the current time, but as soon as I shift it back by more than 12 h the data points no longer appear in Prometheus (querying the metric gives "empty query result").
Is there any way I can populate a Gauge metric with old values to display data that was generated some months ago? Should I use a different type of metric?
Upvotes: 1
Views: 1206
Reputation: 13341
My understanding is that Prometheus disregards samples whose timestamps are too old. I believe a similar issue was discussed here.
You can, however, use backfilling to do a one-time import of your historic data into Prometheus. But beware of the storage.tsdb.retention.time option: data that falls outside the retention window will be deleted shortly after it is imported.
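A minimal sketch of that route, assuming Prometheus >= 2.24 (which supports backfilling via promtool tsdb create-blocks-from openmetrics): write the historic values into an OpenMetrics text file with explicit timestamps and then convert it into TSDB blocks. The metric name, labels and values below are hypothetical placeholders.

from datetime import datetime

# Hypothetical historic (timestamp, value) pairs, e.g. exported from MLFlow.
samples = [
    (datetime(2022, 1, 1, 12, 0), 0.87),
    (datetime(2022, 2, 1, 12, 0), 0.85),
    (datetime(2022, 3, 1, 12, 0), 0.91),
]

with open("history.om", "w") as f:
    f.write("# HELP model_accuracy Validation accuracy of the deployed model\n")
    f.write("# TYPE model_accuracy gauge\n")
    for ts, value in samples:
        # OpenMetrics timestamps are seconds since the Unix epoch.
        f.write(f'model_accuracy{{model="my_model"}} {value} {int(ts.timestamp())}\n')
    f.write("# EOF\n")  # the "# EOF" terminator is mandatory in OpenMetrics

# Then, on the Prometheus host:
#   promtool tsdb create-blocks-from openmetrics history.om ./blocks
# and move the generated blocks into the directory configured with
# --storage.tsdb.path before restarting Prometheus.

Make sure the oldest imported sample still lies within storage.tsdb.retention.time, otherwise the freshly created blocks will be removed again at the next compaction.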
Or you could try one of the third-party backfilling solutions for Prometheus, for example this (no affiliation, no guarantees that it works).
Upvotes: 0