Rocket Pingu

Reputation: 621

Getting Input Names from Frozen Tensorflow Estimator Graph?

If a graph's inputs are passed into placeholders:

input_layer = tf.placeholder(tf.float32, [...], name="inputs")

A frozen graph containing this input_layer will have an input node named "inputs". How will I know the name of the input node of a frozen Estimator graph? Is it the first layer in the model function? Is it the name of the dictionary key of the features parameter of the model function?

When I printed the nodes of the graph def generated after freezing, I got these candidate input-layer names:

# Generated by the numpy_input_fn
enqueue_input/random_shuffle_queue 
random_shuffle_queue_DequeueMany/n
random_shuffle_queue_DequeueMany

# This is probably the input
inputs/shape
inputs

# More nodes here
...
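For reference, a minimal sketch of how these node names can be listed, and how a tensor is then fetched by name once you know it; the frozen_graph.pb path here is a stand-in:

import tensorflow as tf

# Load the frozen GraphDef (the file name is a placeholder).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print every node name to spot candidate input/output layers.
for node in graph_def.node:
    print(node.name)

# Once a candidate name is known, the tensor can be fetched like this:
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    input_tensor = graph.get_tensor_by_name("inputs:0")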

Update

Here's the graph: huge-ass-graph.png

More updates

I checked the guide on using a SavedModel with Estimators and came up with this code:

# Get the input tensor and its shape from the (already loaded) frozen graph.
input_graph_def = graph.as_graph_def(add_shapes=True)
input_layer = graph.get_operation_by_name('input_layer').outputs[0]
input_shape = input_layer.get_shape().as_list()[1:]
run_params['input_shape'] = input_shape

# Feature spec describing the serialized examples the serving signature expects.
feature_spec = {'x': tf.FixedLenFeature(input_shape, input_layer.dtype)}

estimator = tf.estimator.Estimator(model_fn=_predict_model_fn,
                                   params=run_params,
                                   model_dir=checkpoint_dir)

def _serving_input_receiver_fn():
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()

exported_model_path = estimator.export_savedmodel(checkpoint_dir, _serving_input_receiver_fn)

However, when I run this, I encounter this error:

File "... my module", line ..., in ...
    exported_model_path = estimator.export_savedmodel(checkpoint_dir, _serving_inp
  File "...\tensorflow\python\estimator\estimator.py", line 598, in export_savedmodel
    serving_input_receiver.receiver_tensors_alternatives)
  File "...\tensorflow\python\estimator\export\export.py", line 199, in build_all_signature_defs
    '{}'.format(type(export_outputs)))
ValueError: export_outputs must be a dict and not<class 'NoneType'>

Here's the _predict_model_fn:

def _predict_model_fn(features, mode, params):
    features = features['x']

    # features are passed through layers
    features = _network_fn(features, mode, params)

    # the output layer
    outputs = _get_output(features, params["output_layer"], params["num_classes"])
    predictions = {
        "outputs": outputs
    }

    return _create_model_fn(mode, predictions=predictions)


def _create_model_fn(mode, predictions, loss=None, train_op=None, eval_metric_ops=None, training_hooks=None):
    return tf.estimator.EstimatorSpec(mode=mode,
                                      predictions=predictions,
                                      loss=loss,
                                      train_op=train_op,
                                      eval_metric_ops=eval_metric_ops,
                                      training_hooks=training_hooks)

Apparently, one must provide the export_outputs argument in the EstimatorSpec that is returned whenever one decides to export the model. With that, _predict_model_fn gets this return statement, and the extra argument is added to _create_model_fn:

return _create_model_fn(mode, predictions=predictions,
                            export_outputs={
                                "outputs": tf.estimator.export.PredictOutput(outputs)
                            })
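For what it's worth, the key in export_outputs can also be the default serving signature key instead of a custom name such as "outputs"; a small sketch under the same assumptions (same outputs tensor as above):

# Register the prediction under the default serving signature key,
# so clients don't need to know a custom signature name.
export_outputs = {
    tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
        tf.estimator.export.PredictOutput(outputs)
}
return _create_model_fn(mode, predictions=predictions, export_outputs=export_outputs)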

Upvotes: 2

Views: 3421

Answers (1)

Sorin

Reputation: 11968

There's no general way to tell which tensor is the input or the output just by looking at a graph.

You should use the SavedModel APIs. Part of that is generating a signature for the model that says exactly which tensors are the inputs and which are the outputs.

You can take the same model and export it with different signatures. For example, one could take a protocol buffer and give you back a probability, and another could take a string and give you an embedding.
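For instance, once the model has been exported with export_savedmodel as in the question, the serving signature (and therefore the exact input and output tensor names) can be read back. A minimal sketch, with the SavedModel directory as a placeholder; the saved_model_cli show command that ships with TensorFlow prints the same information from the command line:

import tensorflow as tf

# Load the exported SavedModel and print its serving signature.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "/path/to/saved_model")
    signature = meta_graph_def.signature_def["serving_default"]
    print(signature.inputs)   # input tensor names, dtypes and shapes
    print(signature.outputs)  # output tensor names, dtypes and shapes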

Upvotes: 1
