Reputation: 765
I've frozen and exported a SavedModel, which takes as input a batch of videos with the following format according to saved_model_cli:
The given SavedModel SignatureDef contains the following input(s):
inputs['ims_ph'] tensor_info:
dtype: DT_UINT8
shape: (1, 248, 224, 224, 3)
name: Placeholder:0
inputs['samples_ph'] tensor_info:
dtype: DT_FLOAT
shape: (1, 173774, 2)
name: Placeholder_1:0
The given SavedModel SignatureDef contains the following output(s):
... << OUTPUTS >> ......
Method name is: tensorflow/serving/predict
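(For reference, a signature dump like the one above can be printed with something along the lines of saved_model_cli show --dir <export_dir> --tag_set serve --signature_def serving_default, where <export_dir> stands in for the actual export path and the tag/signature names are the usual defaults.)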
I have a TF-Serving (HTTP/REST) server successfully running locally. From my Python client code, I have two populated numpy.ndarray objects: ims, of shape (1, 248, 224, 224, 3), and samples, of shape (1, 173774, 2).
I am trying to run an inference against my TF model server (see client code below) but am receiving the following error: {u'error': u'JSON Parse error: Invalid value. at offset: 0'}
# I have tried the following combinations without success:
data = {"instances": [{"ims_ph": ims.tolist()}, {"samples_ph": samples.tolist()}]}
data = {"inputs": {"ims_ph": ims, "samples_ph": samples}}
r = requests.post(url="http://localhost:9000/v1/models/multisensory:predict", data=data)
The TF-Serving REST docs don't seem to indicate that any extra escaping or encoding is required for these two input tensors. Since the data isn't binary, I don't think base64 encoding is the right approach either. Any pointers to a working approach would be greatly appreciated!
Upvotes: 0
Views: 3206
Reputation: 454
You should send your request like this; JSON-serialize the request body first:
import json

r = requests.post(url="http://localhost:9000/v1/models/multisensory:predict", data=json.dumps(data))
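For context, a minimal end-to-end client might look like the sketch below. It assumes the array shapes from the question and uses zero-filled placeholder data. The two key points are that numpy arrays are not JSON-serializable, so they must first be converted to nested lists (e.g. with .tolist()), and that in the columnar "inputs" format both named tensors go into a single dict.

import json

import numpy as np
import requests

# Placeholder arrays with the shapes from the question; in practice these
# hold the real video frames and audio samples.
ims = np.zeros((1, 248, 224, 224, 3), dtype=np.uint8)
samples = np.zeros((1, 173774, 2), dtype=np.float32)

# Columnar ("inputs") format: a single dict keyed by input tensor name.
# .tolist() converts each array to nested Python lists, which json.dumps can handle.
data = {"inputs": {"ims_ph": ims.tolist(), "samples_ph": samples.tolist()}}

r = requests.post(
    url="http://localhost:9000/v1/models/multisensory:predict",
    data=json.dumps(data),
)
print(r.json())

Alternatively, requests can do the serialization for you: passing json=data instead of data=json.dumps(data) sends the same body and also sets the Content-Type: application/json header.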
Upvotes: 2