Reputation: 85
I trained & exported the Iris Classifier from this guide. I exported it by adding the following to premade_estimator.py:
feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_saved_model("iris_export_base", serving_input_receiver_fn)
I'm able to get inferences using the REST API like so:
import requests
response = requests.post('http://localhost:8501/v1/models/foo:classify',
json={"examples": [{"SepalLength": 2.3,
"SepalWidth": 3.4,
"PetalLength": 2.2,
"PetalWidth": 0.81}]})
I've also been able to successfully get inferences from other models using gRPC, for example this object detection model, which takes an image (as an array) as input:
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel(SERVER_ADDR)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = MODEL_SPEC_NAME
request.inputs['inputs'].CopyFrom(tf.contrib.util.make_tensor_proto(image_ary))
result = stub.Predict(request, 10.0)
But I can't figure out how I'm supposed to specify the inputs for a ClassificationRequest. My best guess is something along these lines:
channel = grpc.insecure_channel(SERVER_ADDR)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = classification_pb2.ClassificationRequest()
request.model_spec.name = MODEL_SPEC_NAME
request.input #...?
But I can't find any information about how to set the input, and everything I've tried so far throws some kind of TypeError.
Upvotes: 3
Views: 309
Reputation: 36
You can find an example of specifying the input here: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/tensorflow_model_server_test.py#L354
example = request.input.example_list.examples.add()
example.features.feature['x'].float_list.value.extend([2.0])
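Applied to the Iris SavedModel from your question, a minimal sketch might look like the following (it reuses the SERVER_ADDR and MODEL_SPEC_NAME placeholders from your snippets, and assumes the feature names match the feature columns you passed to make_parse_example_spec):
import grpc
from tensorflow_serving.apis import classification_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel(SERVER_ADDR)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = classification_pb2.ClassificationRequest()
request.model_spec.name = MODEL_SPEC_NAME

# Each added example is one row to classify; the feature names must match
# the parsing spec exported with build_parsing_serving_input_receiver_fn.
example = request.input.example_list.examples.add()
example.features.feature['SepalLength'].float_list.value.extend([2.3])
example.features.feature['SepalWidth'].float_list.value.extend([3.4])
example.features.feature['PetalLength'].float_list.value.extend([2.2])
example.features.feature['PetalWidth'].float_list.value.extend([0.81])

# Classify (not Predict) is the stub method that accepts a ClassificationRequest.
result = stub.Classify(request, 10.0)
result.result.classifications should then contain one entry per example, each holding (label, score) pairs.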
Upvotes: 2