Nice One A

Reputation: 91

Without ONNX, how to convert a PyTorch model into a TensorFlow model manually?

Since ONNX supports only a limited set of models, I tried to do this conversion by assigning the parameters directly, but the resulting TensorFlow model failed to reach the desired accuracy. Details are as follows:

  1. The source model is LeNet trained on the MNIST dataset.
  2. I first extracted each module and its parameters with model.named_parameters() and saved them into a dictionary whose keys are the module names and whose values are the parameters.
  3. Then I built and initialized a TensorFlow model with the same architecture.
  4. Finally, I assigned each layer's parameters from the PyTorch model to the TensorFlow model.

However, the accuracy of the resulting TensorFlow model is only about 20%. So my question is: is it possible to convert a PyTorch model by this method? If yes, what issue could be causing the bad result? If no, please kindly explain why.

PS: assume the assignment procedure itself is correct.
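One frequent cause of an accuracy drop like this is weight layout: PyTorch and Keras store the same parameters in different axis orders, so copying arrays without transposing silently scrambles them. Below is a minimal NumPy sketch of the required re-ordering; the shapes correspond to a LeNet-style first conv layer and first fully connected layer, but the variable names and values are purely illustrative. (A related pitfall not shown here: the flatten before the first dense layer follows NCHW order in PyTorch but NHWC in TensorFlow, so the dense weights may also need their input axis permuted.)

```python
import numpy as np

# PyTorch stores conv weights as (out_channels, in_channels, H, W);
# Keras Conv2D kernels are (H, W, in_channels, out_channels).
pt_conv_w = np.random.rand(6, 1, 5, 5).astype(np.float32)   # illustrative LeNet conv1
tf_conv_w = np.transpose(pt_conv_w, (2, 3, 1, 0))           # -> (5, 5, 1, 6)

# PyTorch Linear weights are (out_features, in_features);
# Keras Dense kernels are (in_features, out_features), so transpose.
pt_fc_w = np.random.rand(120, 400).astype(np.float32)       # illustrative LeNet fc1
tf_fc_w = pt_fc_w.T                                         # -> (400, 120)

print(tf_conv_w.shape, tf_fc_w.shape)
```

If the assignment loop copied these arrays without the transposes, the network would still run but produce near-random predictions, which matches the ~20% accuracy observed.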

Upvotes: 1

Views: 3383

Answers (1)

vini_s

Reputation: 295

As the comment by jodag mentioned, there are many differences between operator representations in TensorFlow and PyTorch that can cause discrepancies in a manual conversion workflow.

We would recommend using the following method:

  1. Use the ONNX exporter in PyTorch to export the model to the ONNX format.
import torch.onnx

# model is the trained PyTorch model
# dummy_input is a torch tensor matching the model's input shape,
# e.g. torch.randn(1, 1, 28, 28) for MNIST

torch.onnx.export(model, dummy_input, "LeNet_model.onnx")
  2. Use the onnx-tensorflow backend to convert the ONNX model to TensorFlow.
import onnx

from onnx_tf.backend import prepare

onnx_model = onnx.load("LeNet_model.onnx")  # load onnx model
tf_rep = prepare(onnx_model)  # prepare tf representation
tf_rep.export_graph("LeNet_model.pb")  # export the model

Upvotes: 3
