HaKaen


Monitor multiple Google Cloud Run Job executions of the same job

The objective is to monitor asynchronous jobs, which are event-triggered by a Google Cloud Function. Every job is triggered with an argument (or environment variable) corresponding to a directory in Google Cloud Storage. These override arguments tailor each Cloud Run job execution.

I want to set up alerting in my project such that I can also retrieve the override arguments used by each execution.

The issue is that I do not see the override data in the Monitoring interface. EDIT: We can in fact retrieve the override arguments in the protoPayload field of the logs, under protoPayload.response.spec.template.spec.containers, which I missed at first. Thanks a lot DazWilkin for your help in the comments.
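To illustrate that path, here is a minimal sketch that pulls the container args out of a log entry with that shape. The sample entry is illustrative (the image and args values are made up), but the nesting follows the protoPayload.response.spec.template.spec.containers path mentioned above:

```python
def extract_override_args(entry: dict) -> list:
    """Return the args list of every container recorded in the log entry."""
    containers = (
        entry.get("protoPayload", {})
        .get("response", {})
        .get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    return [c.get("args", []) for c in containers]


# Illustrative audit-log entry, trimmed to the relevant fields.
sample_entry = {
    "protoPayload": {
        "response": {
            "spec": {
                "template": {
                    "spec": {
                        "containers": [
                            {
                                "image": "gcr.io/my-project/worker",
                                "args": ["--input", "gs://my-bucket/dir1"],
                            }
                        ]
                    }
                }
            }
        }
    }
}

print(extract_override_args(sample_entry))
# → [['--input', 'gs://my-bucket/dir1']]
```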

In the event that the protoPayload is not in the log, however, the issue persists. We can see the label run.googleapis.com/execution_name, so I believe it should be possible to attach an execution-specific label to the log, but I am not sure whether we can add our own.
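In the meantime, that built-in label can at least be used to correlate all log entries of a single execution. A sketch building such a Cloud Logging filter string (the resource type and label key come from the logs described above; the execution name is a placeholder):

```python
def execution_log_filter(execution_name: str) -> str:
    """Build a Cloud Logging filter matching entries for one job execution."""
    return (
        'resource.type="cloud_run_job" '
        f'labels."run.googleapis.com/execution_name"="{execution_name}"'
    )


# Placeholder execution name for illustration.
print(execution_log_filter("my-job-abc12"))
```

The returned string can be passed to the Logs Explorer or to a logging client's entry-listing filter parameter.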

I am using the Python package google-cloud-run (module google.cloud.run_v2) in my Function to trigger the jobs, so I tried at first to send metadata alongside the run_job request, but I do not see where the data ended up: it appears neither in the execution YAML nor in the Monitoring logs.

from google.cloud import run_v2

def trigger_job(job_name, args, bucket_directory):
    client = run_v2.JobsClient()
    jobrequest = run_v2.RunJobRequest(
        name=job_name,
        overrides={
            "container_overrides": [
                {
                    "args": args,
                }
            ],
        },
    )
    print(f"triggering job {job_name} with args {args}")
    # Attempt to attach bucket_directory as call metadata:
    client.run_job(request=jobrequest, metadata=[("bucket_directory", bucket_directory)])
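The metadata argument above is gRPC call metadata (transport-level headers), which would explain why it is not persisted with the execution. One workaround, a sketch rather than a documented recommendation, is to encode bucket_directory as an environment-variable override, since container overrides are recorded in the execution spec and hence in the audit log's protoPayload. The variable name BUCKET_DIRECTORY is illustrative:

```python
def build_overrides(args, bucket_directory):
    """Build an overrides payload that carries bucket_directory as an env var,
    so it is persisted with the execution spec (and visible in the logs)."""
    return {
        "container_overrides": [
            {
                "args": list(args),
                # BUCKET_DIRECTORY is an illustrative variable name.
                "env": [{"name": "BUCKET_DIRECTORY", "value": bucket_directory}],
            }
        ]
    }


overrides = build_overrides(["--mode", "batch"], "gs://my-bucket/dir1")
print(overrides["container_overrides"][0]["env"])
# → [{'name': 'BUCKET_DIRECTORY', 'value': 'gs://my-bucket/dir1'}]
```

The resulting dict can then be passed as the overrides argument of run_v2.RunJobRequest in place of the one built above.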

My next options would have been to either

Upvotes: 0

Views: 120

Answers (0)
