xtr33me

Reputation: 1106

TensorFlow Serving gRPC client error 12

I am currently trying to serve a simple model via TensorFlow Serving and then call into it over gRPC from Node.js. I felt the easiest way to learn/understand this would be to break it down to the simplest model possible. Please forgive the naming: I originally started with an MNIST tutorial (where I wasn't successful either), so the names still say mnist, but the model is just a simple calculation.

I created and exported the model with the code below:

-- Simple Model --

import os
import tensorflow as tf

# The whole "model": z = 3x + y
x = tf.placeholder(tf.float32, shape=(None))
y = tf.placeholder(tf.float32, shape=(None))
three = tf.Variable(3, dtype=tf.float32)
z = tf.scalar_mul(three, x) + y

-- Export --

model_version = 1
path = os.path.join("mnist_test", str(model_version))
builder = tf.saved_model.builder.SavedModelBuilder(path)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Export the graph with a single named signature mapping the
    # input placeholders to the output tensor.
    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "test_mnist_model": tf.saved_model.signature_def_utils.predict_signature_def(
                inputs={"xval": x, "yval": y},
                outputs={"spam": z})
        })
    builder.save()

The output at the end of the run appears to indicate success:

INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b'mnist_test/3/saved_model.pb'

I then run the TensorFlow model server and point it at my model via the line below, and the server states that it is running at 0.0.0.0:9000:

../../bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --model_base_path=mnist_test --model_name=calctest --port=9000

I then set up the .proto file, which contains this:

syntax = "proto3";

option java_multiple_files = true;
option java_package = "io.grpc.examples.mnisttest";
option java_outer_classname = "MnistTestProto";
option objc_class_prefix = "MNT";

package mnisttest;

// The test service definition (naming left over from the greeter example).
service Greeter {
  // Runs the simple calculation
  rpc test_mnist_model (InputRequest) returns (OutputReply) {}
}

// The request message containing the two input values.
message InputRequest {
  float xval = 1;
  float yval = 2;
}

// The response message containing the result
message OutputReply {
  float spam = 1;
}

Finally, I set up a mnistclient.js file, which I run under Node.js; it contains the code below:

var grpc = require('grpc');
var PROTO_PATH = __dirname + '/../../protos/mnisttest.proto';

module.exports = (connection) => {
    // Load the message/service definitions generated from the proto.
    var tensorflow_serving = grpc.load(PROTO_PATH).mnisttest;//.serving;
    console.log(tensorflow_serving);

    var client = new tensorflow_serving.Greeter(
        connection, grpc.credentials.createInsecure()
    );

    return { 
        test: () => {
            console.log(client);
            // grpc exposes rpc test_mnist_model as the camelCase method testMnistModel.
            return client.testMnistModel({xval:5.0,yval:6.0}, function(err, response){
                if(err){
                    console.log("Error: ",JSON.stringify(err));
                    return {Err: JSON.stringify(err)};
                }
                console.log('Got message ', response);
            });
        }
    }
};

function main() {
    var cli = module.exports('localhost:9000')
    cli.test();
}

if( require.main === module){
    main();
}

With the model running on the TF server, when I run the client under Node.js I get the error below. (I am also printing out the client info.) When I looked up what error code 12 means, it stated the following: "Operation is not implemented or not supported/enabled in this service."

I have been at this for quite some time, and I assume there is just some piece of this that I am blatantly missing. Can anyone provide any insight as to why I can't get this simple call into the model working?

I have never been able to get a TF model served at all, and I thought this simple approach would work best; however, I can't even get this working. Any help would be greatly appreciated! Thanks in advance!

{ InputRequest:
   { [Function: Message]
     encode: [Function],
     decode: [Function],
     decodeDelimited: [Function],
     decode64: [Function],
     decodeHex: [Function],
     decodeJSON: [Function] },
  OutputReply:
   { [Function: Message]
     encode: [Function],
     decode: [Function],
     decodeDelimited: [Function],
     decode64: [Function],
     decodeHex: [Function],
     decodeJSON: [Function] },
  Greeter: { [Function: Client] service: { testMnistModel: [Object] } } }
Client { '$channel': Channel {} }
Error:  {"code":12,"metadata":{"_internal_repr":{}}}

Upvotes: 1

Views: 1395

Answers (1)

Noah

Reputation: 376

It looks like you have defined your own service interface proto (mnisttest.proto), which can be useful when building a custom server. However, the TensorFlow Serving Model Server exposes a service with predefined endpoints. In other words, you are calling a custom service "Greeter" that does not exist on the Model Server.

Please take a look at the Model Server's API/Service: apis/prediction_service.proto. You most likely want the Predict API: apis/predict.proto.

The Predict API uses the model signature you defined at export time, so you will need to pass in tensors for "xval" and "yval", and fetch the "spam" tensor.
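
As a rough, untested sketch, a Predict call from Node.js could look something like the code below. It assumes you have copied the tensorflow_serving/apis protos, plus the tensorflow/core/framework protos they import, under a local protos/ directory so the imports resolve; that directory layout is an assumption, not something from your setup:

var grpc = require('grpc');

// Hypothetical layout: the tensorflow_serving and tensorflow protos copied
// under ./protos so that prediction_service.proto's imports resolve.
var tfServing = grpc.load({
    root: __dirname + '/protos',
    file: 'tensorflow_serving/apis/prediction_service.proto'
}).tensorflow.serving;

var client = new tfServing.PredictionService(
    'localhost:9000', grpc.credentials.createInsecure());

// PredictRequest: model_spec picks the served model and the signature from
// the export's signature_def_map; inputs is a map of name -> TensorProto.
var request = {
    model_spec: { name: 'calctest', signature_name: 'test_mnist_model' },
    inputs: {
        xval: { dtype: 'DT_FLOAT', tensor_shape: { dim: [{ size: 1 }] }, float_val: [5.0] },
        yval: { dtype: 'DT_FLOAT', tensor_shape: { dim: [{ size: 1 }] }, float_val: [6.0] }
    }
};

client.predict(request, function (err, response) {
    if (err) {
        console.log('Error: ', JSON.stringify(err));
        return;
    }
    // The output comes back under the name from the signature ("spam").
    console.log('spam =', response.outputs.spam.float_val);
});

Note that model_spec.name must match the --model_name you passed to the server ("calctest"), and signature_name must match the key in your signature_def_map ("test_mnist_model"). You can verify the exact field names against apis/predict.proto in your checkout.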

Hope this helps! Thanks, Noah

Upvotes: 2
