CyberPunk

Reputation: 1447

Error in prediction in ml-engine with Python

I am using ML Engine for online predictions and have successfully deployed the model. When I use the gcloud ml-engine predict command:

gcloud ml-engine predict --model fastercnn --version v5 --json-instances input.json

it gives the desired predictions. However, when I use the Google API Python client, it gives the following error:

Traceback (most recent call last):
  File "predict.py", line 55, in <module>
    prediction = predict_json('handdetector', 'fastercnn', request)
  File "predict.py", line 35, in predict_json
    response.execute()
  File "/Users/syedmustufainabbasrizvi/.pyenv/versions/sign-language/lib/python3.6/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Users/syedmustufainabbasrizvi/.pyenv/versions/sign-language/lib/python3.6/site-packages/googleapiclient/http.py", line 856, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1/projects/handdetector/models/fastercnn/versions/v5:predict?alt=json returned "Bad Request">

I used the following code in Python:

from googleapiclient import discovery

def predict_json(project, model, request, version='v5'):
    """Send json data to a deployed model for prediction.

    Args:
        project (str): project where the Cloud ML Engine Model is deployed.
        model (str): model name.
        request (Mapping[str: Any]): the request body; keys of each
            instance should be the names of Tensors your deployed model
            expects as inputs. Values should be datatypes convertible to
            Tensors, or (potentially nested) lists of datatypes
            convertible to Tensors.
        version (str): version of the model to target.
    Returns:
        Mapping[str: any]: dictionary of prediction results defined by the
            model.
    """
    # Create the ML Engine service object.
    # To authenticate set the environment variable
    # GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
        name += '/versions/{}'.format(version)
    response = service.projects().predict(
        name=name,
        body=request
    )
    print(response.body)           # inspect the serialized request body
    response = response.execute()  # send the request
    if 'error' in response:
        raise RuntimeError(response['error'])

    return response['predictions']
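
For reference, I build the request roughly like this before passing it to predict_json (the image path and variable names here are just placeholders):

import base64
import json

# 'image.jpg' is a placeholder for the actual image file.
with open('image.jpg', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

# I serialize the instance to JSON before handing it to predict_json.
instance = json.dumps({'image_bytes': {'b64': image_b64}})
request = {'instances': [instance]}

prediction = predict_json('handdetector', 'fastercnn', request)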

The JSON is the same in both commands:

"{\"image_bytes\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsI}}

Model configuration :

The given SavedModel SignatureDef contains the following input(s):
  inputs['image_bytes'] tensor_info:
      dtype: DT_STRING
      shape: (-1)
      name: encoded_image_string_tensor:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300, 4)
      name: detection_boxes:0
  outputs['detection_classes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300)
      name: detection_classes:0
  outputs['detection_features'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, -1, -1, -1, -1)
      name: detection_features:0
  outputs['detection_multiclass_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300, 2)
      name: detection_multiclass_scores:0
  outputs['detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300)
      name: detection_scores:0
  outputs['num_detections'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1)
      name: num_detections:0
  outputs['raw_detection_boxes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300, 4)
      name: raw_detection_boxes:0
  outputs['raw_detection_scores'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 300, 2)
      name: raw_detection_scores:0
Method name is: tensorflow/serving/predict
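
If I read the SignatureDef correctly, each instance maps the image_bytes input name to a value, and since the input is a DT_STRING carrying binary data, the value goes inside a {"b64": ...} wrapper so the service base64-decodes it before feeding the tensor. A minimal sketch (the image path is a placeholder):

import base64

with open('image.jpg', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

# The {"b64": ...} wrapper tells the prediction service to base64-decode
# the value into the DT_STRING image_bytes tensor.
instance = {'image_bytes': {'b64': image_b64}}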

OK, so I added --log-http to the gcloud predict command, and the JSON request is constructed like this:

{"instances": [{"image_bytes": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCACAAIADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBR......"}}]}

I tried the same JSON with Postman and it worked; predictions were generated. When I did the same in Python, it again resulted in a Bad Request error. Inspecting the HTTP body in Python showed the following:

{"instances": ["{\"image_bytes\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4n.......}}]}

I have a hunch that backslashes are being added to the JSON, which results in this error. Does anybody know how to deal with these backslashes?

Upvotes: 0

Views: 233

Answers (1)

CyberPunk

Reputation: 1447

OK, so I was able to crack this. The predict method internally serializes the request body to JSON, so you don't have to convert it to JSON yourself. I was dumping the request to JSON before supplying it to the predict function, so the body got serialized twice, which produced the escaped backslashes that broke the request.
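
A minimal sketch of the difference (the truncated base64 string is illustrative):

import json

image_b64 = '/9j/4AAQSkZJRg...'  # truncated base64-encoded image bytes

# Wrong: dumping the instance myself means predict() serializes the body
# a second time, producing the escaped backslashes in the request.
bad_request = {'instances': [json.dumps({'image_bytes': {'b64': image_b64}})]}

# Right: pass plain Python dicts and let predict() serialize the body once.
good_request = {'instances': [{'image_bytes': {'b64': image_b64}}]}

prediction = predict_json('handdetector', 'fastercnn', good_request)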

Upvotes: 2
