shortcipher3

Reputation: 1380

Custom Model for Batch Prediction on Vertex.ai

I want to run batch predictions inside Google Cloud's Vertex AI using a custom trained model. I was able to find documentation to get online prediction working with a custom-built Docker image by setting up an endpoint, but I can't seem to find any documentation on what the Dockerfile should be for batch prediction. Specifically, how does my custom code get fed the input, and where does it put the output?

The documentation I've found is here. It certainly looks possible to use a custom model, and the job started without complaint when I tried it, but eventually it threw an error. According to the documentation, no endpoint is required for running batch jobs.

Upvotes: 2

Views: 1125

Answers (1)

user238607

Reputation: 2468

You can use the same custom model that you deployed for online prediction to do batch prediction.
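For batch prediction, Vertex AI spins up your container and POSTs batches of instances to the same predict route that online prediction uses, then writes the responses to the output location; your code never reads or writes GCS directly. Below is a minimal sketch of that serving contract using only the standard library; the `AIP_*` environment variables are set by Vertex AI, and `predict_fn` is a placeholder for your real model code:

```python
# Minimal sketch of the Vertex AI custom-container serving contract.
# Vertex AI (online and batch) POSTs {"instances": [...]} to the predict
# route and expects {"predictions": [...]} back. The summing "model"
# below is a placeholder for your own inference code.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

PREDICT_ROUTE = os.environ.get("AIP_PREDICT_ROUTE", "/predict")
HEALTH_ROUTE = os.environ.get("AIP_HEALTH_ROUTE", "/health")


def predict_fn(instances):
    """Placeholder model: replace with your own inference code."""
    return [sum(values) for values in instances]


class VertexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Health checks arrive on the health route.
        status = 200 if self.path == HEALTH_ROUTE else 404
        self.send_response(status)
        self.end_headers()

    def do_POST(self):
        if self.path != PREDICT_ROUTE:
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        reply = json.dumps({"predictions": predict_fn(body["instances"])})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply.encode())


if __name__ == "__main__":
    port = int(os.environ.get("AIP_HTTP_PORT", "8080"))
    HTTPServer(("0.0.0.0", port), VertexHandler).serve_forever()
```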

Just make sure your input data source is in one of the allowed formats, such as JSONL or CSV:

https://cloud.google.com/vertex-ai/docs/predictions/get-batch-predictions#json-lines
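For example, a JSON Lines input file has one JSON instance per line; the field names below are illustrative and should match whatever your container's predict handler expects:

```
{"values": [1, 2, 3, 4], "key": 1}
{"values": [5, 6, 7, 8], "key": 2}
```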

Provide the input file and the output gs:// directory under Batch Predictions in the Vertex AI UI.

It will write the predictions to the output bucket location.
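If you prefer to submit the job programmatically rather than through the UI, the `google-cloud-aiplatform` SDK exposes `Model.batch_predict`. A rough sketch follows; the resource name, bucket paths, and machine type are placeholders you would replace with your own:

```python
# Sketch of submitting a Vertex AI batch prediction job with the
# google-cloud-aiplatform SDK (pip install google-cloud-aiplatform).
# All resource names and gs:// paths below are placeholders.
import json


def instances_to_jsonl(instances):
    """Serialize a list of instance dicts to JSON Lines for batch input."""
    return "\n".join(json.dumps(instance) for instance in instances) + "\n"


def run_batch_prediction(model_resource_name, input_uri, output_prefix):
    """Submit a batch prediction job; requires GCP credentials to run."""
    from google.cloud import aiplatform

    model = aiplatform.Model(model_resource_name)
    job = model.batch_predict(
        job_display_name="custom-model-batch",
        gcs_source=input_uri,                  # e.g. gs://my-bucket/input/data.jsonl
        gcs_destination_prefix=output_prefix,  # e.g. gs://my-bucket/output
        instances_format="jsonl",
        predictions_format="jsonl",
        machine_type="n1-standard-4",
        sync=True,  # block until the job finishes
    )
    # The job reports where the prediction files were written.
    return job.output_info.gcs_output_directory
```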

This notebook is a good example of how to create a custom container:

https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/custom/SDK_Custom_Container_Prediction.ipynb

Upvotes: 0
