Patrick

Reputation: 2719

How to send a batch of images to Tensorflow server?

I've deployed an object detection model on Tensorflow Serving (SSD architecture) and I can request the model with something like:

import json
import requests

# b64 is the base64-encoded image, e.g. base64.b64encode(image_bytes).decode()
data = {"signature_name": "serving_default", "instances": [{"input_tensor": {"b64": b64}}]}
url = '%s/v1/models/mymodel:predict' % MODEL_BASE_URL
response = requests.post(url, headers=headers, data=json.dumps(data))
preds = response.json()['predictions']

where b64 is the base64 encoding of my image. The result has a length of one and contains all the scores, detection_boxes, etc.

I now want to submit a batch of images. The call is the same, except that data is now something like:

data = {"signature_name": "serving_default", "instances": [{"input_tensor": {"b64": b64A}},{"input_tensor": {"b64": b64B}},...]}

where b64A is the encoding of imageA, b64B of imageB, etc. I was expecting a result with a length equal to the size of my batch, but it's still of size 1. Where is my error?
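For reference, the batch payload described above can be built like this. This is a minimal sketch: the byte strings stand in for real image files, and only the JSON structure is checked, not an actual server call. TensorFlow Serving's row format treats each element of "instances" as a separate input example.

```python
import base64
import json

# Placeholder bytes standing in for real JPEG/PNG image data.
images = [b"imageA-bytes", b"imageB-bytes", b"imageC-bytes"]

# One dict per image under the "instances" key.
instances = [
    {"input_tensor": {"b64": base64.b64encode(img).decode("utf-8")}}
    for img in images
]
data = {"signature_name": "serving_default", "instances": instances}

payload = json.dumps(data)
print(len(json.loads(payload)["instances"]))  # one entry per image
```

With a payload shaped like this, the "predictions" list in the response is expected to have one entry per instance.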

Upvotes: 1

Views: 212

Answers (1)

Arghya Ganguly

Reputation: 11

Please try with:

data = {"signature_name": "serving_default", "instances": [{"b64": b64A},{"b64": b64B},...]}

Please follow this link (Data encoding):

https://cloud.google.com/ai-platform/prediction/docs/reference/rest/v1/projects/predict#request-body

Upvotes: 1
