Reputation: 1
So, I am using TorchServe to create a custom handler and then using the resulting ".mar" file to create an inference service with KServe on Kubeflow.
Locally, I am able to see my custom metrics at :8082/metrics, but after deploying, the custom metrics no longer appear on that endpoint. I only see three metrics, which I believe are the defaults: ts_inference_latency_microseconds, ts_inference_requests_total, and ts_queue_latency_microseconds. If anyone has any idea what causes this, please help.
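For reference, my understanding is that custom metrics only show up on the /metrics endpoint when TorchServe's metrics mode is set to Prometheus. A minimal config.properties sketch for the local setup might look like this (the exact values are illustrative, not my actual config):

```properties
# Bind the metrics API on port 8082 (TorchServe's default metrics port)
metrics_address=http://0.0.0.0:8082
# Expose metrics in Prometheus format instead of only logging them;
# without this, custom metrics emitted by the handler may not appear on /metrics
metrics_mode=prometheus
```

I am assuming the same config.properties is packaged with the .mar / deployment, so I don't know why the behavior differs after deploying.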
Ref - https://github.com/kserve/kserve/blob/master/qpext/README.md#configs
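From what I understand of that README, the KServe queue-proxy extension (qpext) has to be told to aggregate and expose the container's metrics via annotations on the InferenceService. A sketch of what I tried, based on the annotation names in the README (the service name and structure here are placeholders, not my exact manifest):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: torchserve-example   # placeholder name
  annotations:
    # Ask qpext to aggregate metrics from the model container
    serving.kserve.io/enable-metric-aggregation: "true"
    # Expose the aggregated metrics for Prometheus scraping
    serving.kserve.io/enable-prometheus-scraping: "true"
```

Even with these set, I still only see the three default ts_* metrics, so I may be misreading how the configs in that README are meant to be applied.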
Thank you for your time
Upvotes: 0
Views: 143