user4572254

Reputation: 196

Can Google's AutoML export trained models for offline inference?

AutoML seems great. One big question is: can we export the trained model for offline inference, for example with TensorFlow or TensorFlow Lite?

Upvotes: 14

Views: 8027

Answers (4)

Peter Gibson

Reputation: 19574

EDIT: It's now possible to export both image classification and object detection models. See https://cloud.google.com/vertex-ai/docs/export/export-edge-model#object-detection

Original Answer Follows

Current status (August 2019) for AutoML Vision is that you can export AutoML image classification models, but not object detection models. This feature is in beta (as is AutoML Vision itself). I couldn't find details for other AutoML products and haven't tried them myself, so I'm unsure of their status.

From https://cloud.google.com/vision/automl/docs/

AutoML Vision Edge now allows you to export your custom trained models.

  • AutoML Vision Edge allows you to train and deploy low-latency, high accuracy models optimized for edge devices.
  • With Tensorflow Lite, Core ML, and container export formats, AutoML Vision Edge supports a variety of devices.
  • Hardware architectures supported: Edge TPUs, ARM and NVIDIA.
  • To build an application on iOS or Android devices you can use AutoML Vision Edge in ML Kit. This solution is available via Firebase and offers an end-to-end development flow for creating and deploying custom models to mobile devices using ML Kit client libraries.

Documentation https://cloud.google.com/vision/automl/docs/edge-quickstart

I trained a classification model, exported the tflite model (it exports to Cloud storage), and was able to download the model files and load them into tensorflow using the Python API without too much hassle. Here's the relevant code for loading the model and running inference:

Based on https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python

import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# Get input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict(frame):
    # `frame` must match the model's input shape and dtype
    # (see input_details[0]['shape'] and input_details[0]['dtype']).
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()

    # `get_tensor()` returns a copy of the output tensor data.
    # Use `tensor()` instead to get a pointer to the tensor.
    output_data = interpreter.get_tensor(output_details[0]['index'])
    return output_data
Upvotes: 3

N8allan

Reputation: 2268

This is not supported as of March 2019. If you are interested in this feature, star this request: https://issuetracker.google.com/issues/113122585

Also check that link in case Google has implemented the feature since this answer was written.

Update: initial support has been added for classification, but not yet detection. See Peter Gibson's answer.

Upvotes: 5

Meelpeer

Reputation: 21

This should be it:

https://cloud.google.com/vision/automl/docs/deploy

Note that the export options (at least currently) do not appear for models you have already trained. You have to select one of your models, start a new training run, and only then do you get the option to either leave the model in the cloud or generate an Edge version.

You can export an image classification model in either generic Tensorflow Lite format, Edge TPU compiled TensorFlow Lite format, or TensorFlow format to a Google Cloud Storage location using the ExportModel API.
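For reference, ExportModel is a plain REST call. The sketch below is my reading of the export quickstart, not a verified invocation; PROJECT_ID, MODEL_ID, and BUCKET_NAME are placeholders, and the valid modelFormat values (e.g. tflite, tf_saved_model, edgetpu_tflite) are listed in the linked docs:

```shell
# Hypothetical values -- substitute your own project, model ID, and bucket.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json" \
  "https://automl.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/models/MODEL_ID:export" \
  -d '{
        "outputConfig": {
          "modelFormat": "tflite",
          "gcsDestination": { "outputUriPrefix": "gs://BUCKET_NAME/export/" }
        }
      }'
```

The exported files land under the Cloud Storage prefix you specify, from where you can download them for offline use.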

Upvotes: 2

Awais

Reputation: 21

It is not yet possible to export models from AutoML. @Infinite Loops, AutoML and ML Engine are different products.

Upvotes: 0
