Reputation: 75
I have a Python application that runs periodically. Sometimes I call a second or third instance of the application to run concurrently; it is basically a job. My app includes a Prometheus Counter and the initialization of the Prometheus HTTP server. Everything works fine; however, when a second instance of my application is run, the new instance immediately fails because the Prometheus web server port is already assigned. That part makes sense, but how do I initialize the Prometheus HTTP server so I can view the metrics scraped from multiple instances of the same application?
If I don't initialize the web server in the same app as my counter metrics, where do my counter metrics go and how can I access them? Can I point my counter metrics somewhere outside of my application? Is there a way for a prometheus server instance to collect metrics from different apps?
I have read the entire README for the Python client library, but it is scant and terse, so it has not been very useful. I'm certain there is an elegant way to achieve this, but I'm not sure how.
Here is a pseudo-code version of my app:
from prometheus_client import start_http_server, Counter

def time_consuming_task():
    c = Counter('name', 'description')
    my_process()        # placeholder for real work
    my_other_process()  # placeholder for real work
    c.inc()

def main():
    start_http_server()  # binds to port 8000 by default
    time_consuming_task()

if __name__ == '__main__':
    main()
Upvotes: 2
Views: 2251
Reputation: 40136
You can specify the port that start_http_server(port) binds to when it runs. If you don't specify a value, it defaults to 8000. Once the port is bound, it can't be bound by another process trying to use the same port. Hence the problem you observe.
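For example, a second instance could be told to expose its metrics on a different port (a minimal sketch; 8001 is just an arbitrary free port):

from prometheus_client import start_http_server

# Expose this instance's metrics on port 8001 instead of the default 8000
start_http_server(8001)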
See the Three Step Demo example.
It's unclear how you're defining my_process and my_other_process. If these are indeed separate Python processes, I'm unsure how your code would work as shown in the pseudocode.
If you want to run multiple Python processes on a single host, you will need to publish the metrics for each process (!) on a different port. If you were to run the processes across multiple hosts, each process could use the same port, as long as that port is available on its host.
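Here is a minimal sketch of how each instance could pick its own port, assuming a hypothetical --port command-line flag; everything else follows your pseudocode:

import argparse
from prometheus_client import start_http_server, Counter

# Created once per process so the metric is registered a single time
c = Counter('name', 'description')

def time_consuming_task():
    # ... my_process() and my_other_process() would go here ...
    c.inc()

def main():
    # Hypothetical flag so each concurrent instance can expose metrics on its own port,
    # e.g. `python app.py --port 8001` for the second instance
    parser = argparse.ArgumentParser()
    parser.add_argument('--port', type=int, default=8000)
    args = parser.parse_args()
    start_http_server(args.port)
    time_consuming_task()

if __name__ == '__main__':
    main()

Each port used this way then needs a matching entry in the Prometheus scrape configuration, as shown below.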
Prometheus (almost) always connects to so-called scrape targets (the processes publishing metrics) over the network, so you must have a scrape_config entry in the Prometheus configuration.
If your code is currently working and publishing metrics on the default port 8000, and the Prometheus server is on the same host, then you probably have a static_config section with the value localhost:8000 (perhaps 127.0.0.1:8000) that configures Prometheus to scrape the metrics:
scrape_configs:
  - job_name: your-python-app
    static_configs:
      - targets:
          - localhost:8000
To use different hosts and ports, simply add the extra entries to the Prometheus static_config and restart the server:
scrape_configs:
  - job_name: your-python-app
    static_configs:
      - targets:
          - localhost:8000
          - localhost:8001
          - localhost:8002
Upvotes: 2