Reputation: 21
Using a basic gRPC client from the TensorFlow Serving examples to get predictions from a model running in Docker, I get this response:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "OS Error"
debug_error_string = "{"created":"@1580748231.250387313",
"description":"Error received from peer",
"file":"src/core/lib/surface/call.cc",
"file_line":1017,"grpc_message":"OS Error","grpc_status":14}"
This is what my client currently looks like:
import grpc
import tensorflow as tf
import cv2
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
def main():
    data = cv2.imread('/home/matt/Downloads/cat.jpg')
    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'model'
    request.model_spec.signature_name = 'serving_default'
    request.inputs['image_bytes'].CopyFrom(
        tf.make_tensor_proto(data, shape=[1, data.size]))

    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)


if __name__ == '__main__':
    main()
Thanks in advance for any help :)
Upvotes: 2
Views: 10550
Providing the solution here, even though it is present in the comments section, for the benefit of the community.
The solution is to start the TensorFlow Model Server by running the Docker container with the command below before executing the client:
docker run -t --rm -p 8500:8500 -p 8501:8501 \
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
-e MODEL_NAME=half_plus_two \
tensorflow/serving &
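Once the container is up, a quick sanity check (assuming the standard half_plus_two test model from the TensorFlow Serving repository, as in the command above) is to query the REST endpoint on port 8501:

```shell
# Query the REST API; half_plus_two returns x/2 + 2 for each input.
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  -X POST http://localhost:8501/v1/models/half_plus_two:predict
```

If the server is serving correctly, this returns predictions of the form {"predictions": [2.5, 3.0, 4.5]}; only then is it worth debugging the gRPC client itself.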
In addition to invoking the TensorFlow Model Server, note that port 8500 is exposed for gRPC and port 8501 is exposed for the REST API. Since the client above connects over gRPC to localhost:8500, the container must publish port 8500 (with -p 8500:8500), or the call fails with StatusCode.UNAVAILABLE.
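When the client still cannot connect, it helps to confirm which of the two ports is actually reachable from the host. A minimal stdlib-only sketch (the host and port values are just the defaults assumed above):

```python
import socket


def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == '__main__':
    # 8500 must be reachable for the gRPC client; 8501 only serves REST.
    print('gRPC 8500 reachable:', port_open('localhost', 8500))
    print('REST 8501 reachable:', port_open('localhost', 8501))
```

If 8501 is open but 8500 is not, the container was started without publishing the gRPC port, which matches the UNAVAILABLE error in the question.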
Upvotes: 0