Reputation: 5149
I know how to load models into the TensorFlow Serving container and communicate with it via HTTP requests, but I am a little confused about how to use protobufs. What are the steps for using protobufs? Can I just load a model into the container and use something like the code below?
from tensorflow_serving.apis import predict_pb2
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'
Or do I have to do some extra steps before or after loading the model?
Upvotes: 1
Views: 108
Reputation: 492
Here is the sample code for making an inference call over gRPC in Python:
In the same folder above, you will find an example of calling the REST endpoint.
Upvotes: 1