AVarf

Reputation: 5149

Load a model into Tensorflow serving container and use protobufs for communicating with it

I know how to load models into the TensorFlow Serving container and communicate with it via HTTP requests, but I am a little confused about how to use protobufs. What are the steps for using protobufs? Shall I just load a model into the container and use something like the code below:

from tensorflow_serving.apis import predict_pb2

request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default' 

Or do I have to do some extra steps before/after loading the model?

Upvotes: 1

Views: 108

Answers (1)

Happy Gene

Reputation: 492

Here is sample code for making an inference call over gRPC in Python:

resnet_client_grpc.py

In the same folder, you will find an example of calling the REST endpoint.
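For reference, here is a minimal sketch of such a gRPC client, assuming the container exposes gRPC on the default port 8500 and that the model's input tensor is named 'image_bytes' (a hypothetical name; check your model's actual signature with saved_model_cli):

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Open a channel to the serving container (gRPC is served on port 8500 by default).
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build the request, as in your snippet.
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'

# Attach the input. 'image_bytes' is a hypothetical tensor name; use the
# input name from your model's serving_default signature.
with open('cat.jpg', 'rb') as f:
    data = f.read()
request.inputs['image_bytes'].CopyFrom(tf.make_tensor_proto(data, shape=[1]))

# Send the request with a 10 second timeout and print the outputs.
result = stub.Predict(request, 10.0)
print(result)

No extra server-side steps should be needed beyond loading the model: the standard tensorflow/serving Docker image serves gRPC on port 8500 and REST on 8501, so just make sure the gRPC port is published (e.g. docker run -p 8500:8500 ...).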

Upvotes: 1
