Reputation: 199
Goal: serve prediction requests from a Vertex AI Endpoint by executing custom prediction logic.
Detailed steps: For example, we may already have uploaded an image_quality.pb model (developed in a plain Python environment, outside Vertex AI) to a GCS bucket.
Next, we want to implement custom image inference logic by deserializing that model and serving it from a Vertex AI endpoint.
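To make the custom-logic step concrete, here is a minimal sketch of what I imagine it could look like using Vertex AI's Custom Prediction Routines (CPR) `Predictor` interface. The class name, the `quality_score` field, and the preprocessing details are my assumptions, not working code:

```python
# Sketch of a Custom Prediction Routine (CPR) predictor.
# The class name, response fields, and model-loading details are assumptions.
try:
    # Real base class from the Vertex AI SDK (google-cloud-aiplatform >= 1.16).
    from google.cloud.aiplatform.prediction.predictor import Predictor
except ImportError:  # lets the sketch run even without the SDK installed
    Predictor = object


class ImageQualityPredictor(Predictor):
    def load(self, artifacts_uri: str) -> None:
        """Called once at container start; artifacts_uri is the GCS model dir."""
        # Hypothetical: deserialize the non-Vertex model here, e.g.
        #   import tensorflow as tf
        #   self._model = tf.saved_model.load(artifacts_uri)
        self._model = None  # placeholder for the deserialized model

    def preprocess(self, prediction_input: dict) -> list:
        # Vertex wraps the request body; "instances" holds the list of inputs.
        return prediction_input["instances"]

    def predict(self, instances: list) -> list:
        # Placeholder scoring; a real version would call self._model(...).
        return [{"quality_score": 0.0} for _ in instances]

    def postprocess(self, prediction_results: list) -> dict:
        # Vertex online prediction expects a JSON body with "predictions".
        return {"predictions": prediction_results}
```

My understanding is that such a class is then packaged into a serving container with `LocalModel.build_cpr_model` and uploaded, but I have not found an end-to-end example for a model produced outside Vertex AI.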
Finally, we want to pass a list of images (stored in another GCS bucket) to that endpoint.
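For the last step, this is roughly how I picture sending the images; since online prediction bodies are JSON, I assume raw bytes have to be base64-encoded, and the `"filename"`/`"b64"` payload shape below is my own guess, not a documented schema:

```python
import base64


def build_instances(image_blobs: list[tuple[str, bytes]]) -> list[dict]:
    """Turn (filename, raw bytes) pairs into JSON-serializable instances.
    The "b64" payload shape is an assumption of this sketch."""
    return [
        {"filename": name, "data": {"b64": base64.b64encode(raw).decode("ascii")}}
        for name, raw in image_blobs
    ]


def predict_images(endpoint_name: str, bucket_name: str, prefix: str):
    """Download images from a GCS bucket and send them to a deployed endpoint.
    Requires google-cloud-aiplatform, google-cloud-storage, and GCP
    credentials, so it is not executed here."""
    from google.cloud import aiplatform, storage

    blobs = storage.Client().list_blobs(bucket_name, prefix=prefix)
    pairs = [(b.name, b.download_as_bytes()) for b in blobs]
    endpoint = aiplatform.Endpoint(endpoint_name)
    return endpoint.predict(instances=build_instances(pairs))
```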
We also want to see the logs and metrics in TensorBoard.
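For the metrics part, I assume something like Vertex AI Experiments (which can be backed by a Vertex TensorBoard instance) would work; the experiment and run names below are hypothetical and the SDK calls are untested:

```python
def summarize_latencies(latencies_ms: list[float]) -> dict:
    """Aggregate per-request latencies into metrics worth logging."""
    n = len(latencies_ms)
    ordered = sorted(latencies_ms)
    return {
        "request_count": n,
        "mean_latency_ms": sum(latencies_ms) / n,
        "p95_latency_ms": ordered[min(n - 1, int(0.95 * n))],
    }


def log_to_vertex_tensorboard(metrics: dict) -> None:
    """Push metrics to a Vertex AI Experiment viewable in TensorBoard.
    Needs google-cloud-aiplatform and GCP credentials, so not executed here.
    Project, location, experiment, and run names are hypothetical."""
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",
        location="us-central1",
        experiment="image-quality-serving",
    )
    aiplatform.start_run(run="serving-metrics")
    aiplatform.log_metrics(metrics)
    aiplatform.end_run()
```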
Existing Vertex AI code samples show how to invoke model.batch_predict / endpoint.predict, but don't cover how to execute custom prediction code.
It would be great if someone could provide guidelines and links to documentation/code for implementing the steps above.
Thanks in advance!
Upvotes: 0
Views: 206