Reputation: 11
I'm new to TF Serving
and currently I have the following problem. I run the server side using bert_en_uncased from TF Hub
, but I don't understand how to implement the client side correctly. I've come across a couple of articles, but each of them assumes that I already have a fine-tuned model with pre-assigned request handlers. Can anyone share some tutorials or API references to make this easier?
Some of the articles I have read:
PS. I'm not trying to create a QA model or anything like that; I just need BERT embeddings from this particular model.
Upvotes: 0
Views: 231
Reputation: 11
UPD: I've already solved this problem. The main issue was that the TF Hub
model doesn't ship with any spec list or the like, only some documentation on how to use it with tf.hub
. If you face a similar problem, I recommend doing the following:
1) Install (or compile from source) saved_model_cli
, TensorFlow's tool for, let's say, unpacking saved models and getting their specs (e.g. `saved_model_cli show --dir /path/to/model --all`);
2) Find some guides on TF Serving
and just change a few pieces of code; nearly every implementation is the same;
3) You might (and you WILL, believe me) run into deprecation warnings. Don't bother hunting for documentation; the solution was here :)
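For reference, here's a minimal client sketch along the lines of steps 1-2. The input tensor names (`input_word_ids`, `input_mask`, `input_type_ids`), the model name (`bert`), and the port are assumptions based on what saved_model_cli typically reports for bert_en_uncased and TF Serving defaults, so verify yours first:

```python
import json

# NOTE: this is a sketch, not a drop-in client. Verify the exact input
# names and shapes for YOUR copy of the model with:
#   saved_model_cli show --dir /path/to/saved_model --all

SEQ_LEN = 128  # assumed fixed sequence length

def make_predict_payload(token_ids, seq_len=SEQ_LEN):
    """Build a TF Serving REST 'predict' payload for a BERT-style SavedModel."""
    pad = seq_len - len(token_ids)
    return {
        "instances": [{
            "input_word_ids": token_ids + [0] * pad,         # WordPiece ids, zero-padded
            "input_mask": [1] * len(token_ids) + [0] * pad,  # 1 = real token, 0 = padding
            "input_type_ids": [0] * seq_len,                 # single-segment input
        }]
    }

def get_embeddings(token_ids, url="http://localhost:8501/v1/models/bert:predict"):
    """POST to TF Serving's REST API (default REST port 8501).

    The response's "predictions" field holds the model outputs -- for
    bert_en_uncased that should be pooled_output / sequence_output,
    i.e. the embeddings.
    """
    import requests  # pip install requests
    resp = requests.post(url, data=json.dumps(make_predict_payload(token_ids)))
    resp.raise_for_status()
    return resp.json()["predictions"]

# Usage, assuming a server started with something like:
#   tensorflow_model_server --rest_api_port=8501 --model_name=bert \
#       --model_base_path=/models/bert
# then:
#   embeddings = get_embeddings([101, 7592, 2088, 102])  # ids for "[CLS] hello world [SEP]"
```

The token ids themselves still have to come from the model's WordPiece vocabulary (e.g. via the BERT tokenizer); TF Serving only runs the graph, it doesn't tokenize for you.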
Good luck on serving your models!
Upvotes: 1