Ajinkya

Reputation: 1867

Failed to convert tensorflow frozen graph to pbtxt file

I want to extract a .pbtxt file given a TensorFlow frozen inference graph as input. To do this I am using the script below:

import tensorflow as tf
from tensorflow.python.platform import gfile

def converter(filename):
    # Read the binary frozen graph and parse it into a GraphDef
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        # Write the graph back out in text (pbtxt) format
        tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pbtxt', as_text=True)
        print(graph_def)

# Pass the name of the file to be converted; a new file
# will be created in the pbtxt directory.
converter('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb')

As an example, I am using the SSD MobileNet architecture. The above code produces a pbtxt file, but I cannot use it. For reference, see the image below.

[Image: side-by-side comparison of the two pbtxt files]

RIGHT: the original pbtxt file of the MobileNet architecture.

LEFT: the pbtxt file obtained using the above script.

When I use the official pbtxt on the RIGHT I get correct results, but I do not get any predictions with the LEFT pbtxt, which I generated using the above script.

I am running these predictions through the OpenCV DNN module:

tensorflowNet = cv2.dnn.readNetFromTensorflow('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb', 'pbtxt/protobuf.pbtxt')

How do I convert a MobileNet frozen inference graph into a proper pbtxt format so that I can run inference?

References: https://gist.github.com/Arafatk/c063bddb9b8d17a037695d748db4f592

Upvotes: 1

Views: 5555

Answers (4)

tttzof351

Reputation: 391

Convert pb to pbtxt for TF 2.x:

import tensorflow as tf

def graphdef_to_pbtxt(filename):
    # Parse the binary GraphDef using the TF1 compatibility API
    with open(filename, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    # str(graph_def) is the protobuf text representation
    with open('protobuf.txt', 'w') as fp:
        fp.write(str(graph_def))

graphdef_to_pbtxt('saved_model.pb')
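
As a variant (a minimal sketch under the same TF 2.x assumptions; the output name protobuf.pbtxt and the function name are arbitrary), the protobuf text_format module can serialize the GraphDef explicitly instead of relying on str():

import tensorflow as tf
from google.protobuf import text_format

def graphdef_to_pbtxt_explicit(filename):
    # Parse the binary frozen graph via the TF1 compatibility API
    with open(filename, 'rb') as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    # MessageToString emits the canonical protobuf text format
    with open('protobuf.pbtxt', 'w') as fp:
        fp.write(text_format.MessageToString(graph_def))

graphdef_to_pbtxt_explicit('saved_model.pb')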

Upvotes: 0

user13721506

Reputation: 1

This might help someone. I met the same problem with mars-small128.pb for OpenCV 4.3.0 pulled from master:

import argparse
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants

def save(graph_pb, export_dir):
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

    # Read the frozen graph into a GraphDef
    with tf.gfile.GFile(graph_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    sigs = {}

    with tf.Session(graph=tf.Graph()) as sess:
        # INFO: name="" is important to ensure we don't get spurious prefixing
        tf.import_graph_def(graph_def, name='')
        g = tf.get_default_graph()

        # INFO: if a name is given, the input/output tensors must be prefixed
        #       accordingly, e.g. name=net => net/images:0 & net/features:0
        inp = g.get_tensor_by_name("images:0")
        out = g.get_tensor_by_name("features:0")

        sigs[signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY] = \
            tf.saved_model.signature_def_utils.predict_signature_def(
                {"in": inp}, {"out": out})

        builder.add_meta_graph_and_variables(sess,
                                             [tag_constants.SERVING],
                                             signature_def_map=sigs)

    builder.save(as_text=True)


if __name__ == '__main__':
    # export_dir = './saved'
    # graph_pb = '../models/deep_sort/mars-small128.pb'

    parser = argparse.ArgumentParser()
    parser.add_argument('--input', help="path to frozen pb file")
    parser.add_argument('--output', help="folder to save the SavedModel to")
    args = parser.parse_args()
    if args.input is not None and args.output:
        save(args.input, args.output)
    else:
        print("Usage: adapt_opencv.py --input 'path_to_pb' --output './saved'")
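
If the script is saved as adapt_opencv.py (the filename is only inferred from the usage message above), a run might look like this, reusing the example paths from the comments in the script:

python adapt_opencv.py --input ../models/deep_sort/mars-small128.pb --output ./saved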

Upvotes: 0

Ajinkya

Reputation: 1867

Here's what worked for me:

  • git clone https://github.com/opencv/opencv.git
  • Navigate to opencv/samples/dnn/
  • Copy your frozen_inference_graph.pb and the *.config file corresponding to your .pb file
  • Paste the copied files into the opencv/samples/dnn directory
  • Make a new folder in the dnn directory and name it "exported_pbtxt"

And run this script:

python3 tf_text_graph_ssd.py --input frozen_inference_graph.pb --output exported_pbtxt/output.pbtxt --config pipeline.config
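
The same directory also ships analogous generators for other meta-architectures (for example tf_text_graph_faster_rcnn.py); presumably they follow the same --input/--config/--output pattern, along these lines:

python3 tf_text_graph_faster_rcnn.py --input frozen_inference_graph.pb --output exported_pbtxt/output.pbtxt --config pipeline.config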


That's all you need. Now copy the frozen inference graph and the newly generated pbtxt file, and use the following script to run your model using OpenCV:

import cv2

# Load a model imported from Tensorflow
tensorflowNet = cv2.dnn.readNetFromTensorflow('card_graph/frozen_inference_graph.pb', 'exported_pbtxt/output.pbtxt')

# Input image
img = cv2.imread('image.jpg')
rows, cols, channels = img.shape

# Feed the image to the network as a blob
tensorflowNet.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

# Runs a forward pass to compute the net output
networkOutput = tensorflowNet.forward()

# Loop over the detections
for detection in networkOutput[0,0]:

    score = float(detection[2])
    if score > 0.9:

        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows

        #draw a red rectangle around detected objects
        cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (0, 0, 255), thickness=2)

# Show the image with a rectangle surrounding the detected objects
cv2.imshow('Image', img)
cv2.waitKey()
cv2.destroyAllWindows()

Upvotes: 3

Dmitry Kurtaev

Reputation: 833

Please follow this guide: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API. There is no point in creating a .pbtxt file without modifying it. The script from the guide creates an extra text graph which is used for import into OpenCV.

Upvotes: 0
