Reputation: 19
I am running batch predictions with Vertex AI Batch Prediction. My script ran normally before with no issues, but a few days ago, without any changes to the script, the batch prediction elapsed time became much longer than usual.
My data input for batch prediction already follows the documentation here:
{"content": "gs://bucket_name/folder/sub_folder/JPEG_01.jpg", "mimeType": "image/jpeg"}
{"content": "gs://bucket_name/folder/sub_folder/JPEG_02.jpg", "mimeType": "image/jpeg"}
My script for batch prediction, also based on the documentation above, looks like this:
from google.cloud import aiplatform

def create_batch_prediction_job(job_display_name, data_input_path, result_prediction_path, model_resource_name):
    # project_id and region_id are defined elsewhere in my script.
    aiplatform.init(project=project_id, location=region_id)
    # Reference the trained model by its full resource name.
    model = aiplatform.Model(model_resource_name)
    # Start the batch prediction job; sync=True blocks until it finishes.
    batch_prediction_job = model.batch_predict(
        job_display_name=job_display_name,
        gcs_source=data_input_path,
        gcs_destination_prefix=result_prediction_path,
        sync=True,
    )
    # Redundant with sync=True, but harmless.
    batch_prediction_job.wait()
    print(f"Batch Prediction Job Name: {job_display_name}")
    print(f"Batch Prediction Job : {batch_prediction_job.resource_name}")
    print("\nState")
    print(batch_prediction_job.state)
    return batch_prediction_job
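A hypothetical call would look like this; the GCS paths and the model resource name below are illustrative placeholders, not the real values:

batch_job = create_batch_prediction_job(
    job_display_name="image-batch-prediction",
    data_input_path="gs://bucket_name/folder/input.jsonl",
    result_prediction_path="gs://bucket_name/folder/results/",
    model_resource_name="projects/PROJECT_NUMBER/locations/REGION/models/MODEL_ID",
)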
Here is my proof. A successful run for 7,503 images:
A run for only 20 images, which I cancelled because its elapsed time had already exceeded that of the 7,503-image run:
I've checked the VIEW DETAILS and the View Logs pages, and they gave me nothing. Has anyone faced the same issue, or does anyone know a solution?
I've also raised the issue on the Google Issue Tracker here and am waiting for a response.
Upvotes: 0
Views: 58
Reputation: 19
Coming back to answer my own question: after a few days without changing anything in my batch prediction script, it now runs within my usual elapsed time again. I raised the issue on the Google Issue Tracker; I believe this was a bug on the service side, since my script worked flawlessly both before and after without any modification.
Here is the proof I can attach:
Upvotes: 0
Reputation: 1
Batch prediction performance for AutoML object detection depends on several factors, such as the size of the input dataset and the resources Vertex AI provisions for the job. If optimizing the current setup doesn't yield the desired efficiency, consider deploying the model to an endpoint for online predictions. Online predictions handle individual requests in real time, which can be more efficient for smaller datasets or when immediate results are required. However, this approach may not suit large-scale batch processing due to scalability constraints.
Addressing these factors should improve the efficiency of your batch predictions in AutoML object detection tasks.
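If you want to try the online route, here is a minimal sketch using the Vertex AI SDK; the model resource name, project, region, and file name are placeholders, and the parameters follow the AutoML image object detection prediction schema:

import base64
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Deploy the AutoML model to an endpoint (one-time setup; the endpoint
# is billed for as long as it stays deployed).
model = aiplatform.Model("projects/PROJECT_NUMBER/locations/REGION/models/MODEL_ID")
endpoint = model.deploy()

# Online prediction for image models expects base64-encoded bytes,
# not a GCS URI like the batch input file.
with open("JPEG_01.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

response = endpoint.predict(
    instances=[{"content": encoded_image}],
    parameters={"confidenceThreshold": 0.5, "maxPredictions": 10},
)
print(response.predictions)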
Upvotes: -1